[Binary archive content removed. Recoverable tar member listing (Zuul CI output, owner: core):

var/home/core/zuul-output/                    (directory)
var/home/core/zuul-output/logs/               (directory)
var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log)

The gzip payload is compressed binary data and is not recoverable as text.]
7b'q#nwJc2͸v <aYDI̎ĠVw_gt=ڦ{ʥO{LmB0e9ZfΉ׫ 9q})X/2Vk9Em\p-b{ȹ`!tY^=dR:l0+-qk'pxSbe!3ͽgUP$Z5!s&bbIb.*N$ց}2 Z].ֺG,ʆN"E NN|g>2ƛ- -`ڱ3 }3LZn>LaGCG{h |Yfz [[~4V(k{ۅQMB|]\Ju1xHkiQ~>VL3طR#-CQ$T:*rm%8 s6q"<1QRROIv)J$;Ibp4d9P ~8hD￉vC)ͭ X@T:I4:%h-njh ie(۠w:nFSg7-9]wv%/%f -͙(J;8#n_ ',U#.i,NhS2\hF4kJPc)3ku ֟ā',Τǣ1U;/|YmGf˭'d;OrԀ>NR}A_OuB  9 c0X4^lNbe ZX78=1`U J52B1ȸ3UxI.:EKTjAǶe n;5x-zzWӆ7_%ʶ|*mIvE,dYŎފ$|^oѷO6H 'nXNP>a-\Lȓ VzVk"&6\e&HS"&2q[.XZZn Yjq4~LAXjo7ChN2n#:5{ YAR"D)hIJ#g[| 4 Lѵffpv*T)JSUsh-Od*` Qт"D hr)y{>mO*>/̳]3 Jd[Z&3; DĉIXY LZE"=_TG>"(e0al[ka)Ar@vN"EUC`< N8J>&{?ƻ%0765C77C7hQX~ ׯ'<_=y茷 ɍg8B2VC.&v:rO'9mpVσ")9/y [g==]_UW6?dS9_IeVF҃9LNg.U}W4ڸhZgGjP9{]xR{ucw巟}ww?=89׻Mn ;4ѣYGGS`M+l@TQ꯫ra4܇:/"rGA.pE?ojѾh-A3=c#FBG- qQ8qHZC*D`0MA00T Fҡ +*<F8~GBS0aы`QS)9dhl⪱j&bɲsRY͉geɚo^߳FsG*;ʝQL+N1Lkpt%͐|Y_?F~Jve^$ A&I\8}y|v|c';LXhddkQ,ec #EY=߁!I"jxT~8!,ȍ$S{+%R.wR\.rVLK,f0oOm<Lg7F]-aNVv|ޖq7-۩\Ϊɶ %ZX %~ńtH93˱¿|5oQ]PQ߬x?Gk~]03qF-L`xN0+D{##a:0- 9r}*$,r!|tΚw4H&$!E#c}9AC9dɘV!!: $H'. p8K)K{59d$<@"Ǭ&M`,R(!zw5Nd'@DX͌{0rHRxHMLc1>9N8 Ν-Wsz%4ncwM+? [-)) %1蘆+.eQE lc!| AI9I1]Ȩ"(n%A %'bQ`1 A^F;g^PW+ MYU4.*)4ǃ+o"Xys͇lTyվv# *a-FxN8R`"cpL F 持U &ȟ"fY9I#W߶a5hk~*{3hp{ppvg`SQ]rm}e} %b;9zÜ ]R0vY4a^ wE%K>yO+~)S.}]} W ݆Q⋅v-KgP>TGo Wzw>L?O:SqDFO-mMےڴOpV-:ktTǁT3TsH0RVT0ǟs?J*5Z{e!ȅ]0#eZ[n0C8e5T}˰@Dk5;H@5dɐ91x{?qNo>w({˛Wo^#+7cx ߼z.<@jH~ /|`%3!H\? 3Z+VؔƖ[ FfK(X%6: &G u@p:(-'! 
56BV+*Gd̽hBR1tYEEFѪt '3WN ֵ1\Gr9 @g!ux) 3!By@ қB f 2Z //P a{ K ^NCfKH^SF:[vHV1ovq9Ob;%`)Nf{Ⱚ3fRu :QˇFnlZ9uW *nLBN^540^QQ@JWOa8'Je$NͣYU% (ٞ_}TC75/<fڻcgEZϦ-{>irWz!p>5E5ei\~î,6?mX#6 grԿRK]FГ/8=DG㮋eRlͬqlGӗ@fcM2h!_F,B0#BXYL^jʈhA #(H8Hg_y_gPN:M7mͰj|梙﬽^=6/!Wc:EYe rsrS\g)>c21E4^JJ8%Rb%0l!7idU@;ݞ?en}mhA%oV |SR\\V{59 z8` Bq`HD5)wʱ줥t[czFfZms$MU&crV\{^\jQcSd/`D$.00%{nB|,nqo Y.96l=m'lkfSpMjڐ#fr^#暨&Md`Z}8'ϝPiaNȁN\Ռ\qWBErq؟frd~ryڙ rȯv;y-e$:S*ul7s Lpp{ws*͍f@6:K!?BjO|od-ꅂ+\bq痷9b4<0 m*CA ƙARL]XNPsRbN)$s[[RAG,JH|:>smoGnN@Bhz|V`ir!,FEШl?PVz[&\tt$t4@I\$GW H {`F H><6'69 aD|=אa5i7$XQ̒A,.:R(7R.^->h39 *X0B`D$4PuCj(V幤D#Y*R!,cP6F?l#fF8댑\9x(ޣ\m^G+JT ZT9)V&3OA0; >SxB\Mo*lbÅ }`+y .7k`ɾQX*7$^ֲvYKMBqXt_ "E,!]Xnu '1bePʏ';t|&&VQ`)$<3@c NV|/37פN%`;g}Nб~tTͰ-b+V{?+~y{E- 1HS0ea$fBpEexC!ЉfR_f,!q И.)aaBȳS3poGnΓyL_B!@Vltuk8gnD/ A=>:Lj!%r1NhnȉpI[+җ3KJӘG"h@QAsglrD`p'9%x-zrŎh{g+_CޛMyou"nb,dY ˲RŠn/z7^},s$UhnAiJ^P"敏`B܄qđ)ԲsyjN:R`FvĀ)=N; @LDK(F[ l% 3Bs4h;y n?ܴp/LB7>qi֛&CCKv uԛ e@A1Aqv4ڊR;4.$ ǂ - 1:DS.nH?F'fנE|ز:@a⫬¸3n+j#0ڞ8r+-(ثw+LE4J;Daldu"ap) |e&r2xddJ <(T)ڒI߇ %"Z(@>;щ=W b~u/. \9!VmKBPO%O@HRSj($r?eҴ JBp2` 1L뜃  whdR\)6?PQYԋm5<H&Rzʭrsn9WyW#ɍ?>-r0o}FoM[*( *|c>~ޥ|W7ʥFADIqz0> ɼ5egT!7>B9ag7BI)gv2nx6>CS-~kIUGQDIt6#P(76  8neP%Ϡ8S*?/cbY.]*nlDk*݇{K 1FJu*\k΀+=;iVw6~uxF1P{4/ݛ[ުίwU`12\Q..gcخlFowpo983*ކ'$tܡ 6ѸO8y*> =yxz09..:Ԧ !Vf$+BFNo}n{jOho2=~ʅ#hVNi[M3;Yh\*/IZbDD02W lmM`EL KB*PP4< )D%L{(!f:b]H"TTֶ!aO/dhJT: $( XʂBģf[{ ϩ=sڐ@"iEҎ)ioZŽ &zMZEn0]MŻ"J8̉aPIVJri^qɰl*hXkrUZ.c̣R{)qp-E _b-M?Zco-J8P4E:$cqv$@֎m@[Cq7vA%u&a`/F8?/h)R$x7¦ k"p] U~ULDrXT'/8(=Q IJuEŌ.֔z.hCL$q'wQÿ3&H|2 Yݳ}Mbp)Wpo[u7 BwAl7ry}?:@{mZwW?{7FM;qΜiw΀MX>0yQ~yS C0뉘7 |U+YX匕 0_D%;N  -Bwh$R! 
,LsFAe,SpِEPRq: ЭdF ]=]J }'wvN8Okk 4QuiT q?ZZkN92mk.i^*6&컈bm% 3Vi0FD <To9IehɃ̢i+ #DZyHĚ4RaoŔ|47Y]+DY'xCuϗUWnC#6m2WnlB۫ε^61^QQa KϥYp&;*]ّ{ඏf]!cj  |yG+%χxK}}}gEH<8x=KqY\tQtHbq״W$VlmsFvSzъ4c-qM)Y,isc**獬S%K%8Zˎg=eqz8lVEVl:뫛Z̞\_B}n7r&B@Rt|5 %p) bTmAL&ʌdXGhk1LY{_otm1}j.tQn;,ԮE AT'M (0+MN )XTL$H> 5A)gRXRrT@ dZڐYNy'oarE!8ؽܢ5N5gZO\vG.k]$sd,y:\6UĂU╅b+;z.?^WN^Jw5YKj۰yv7p^߿-m,BO󘏢庤ֶ{Y\zㅅ̢ȲPN^!㖁,qe sbdth&Wꍭ˘7>hȮv611#16$ SNQÙz\Kj1Fbi= T/D杠:5Ko\Ѩ:$GU*ME%RҐ brwooO @x[ F9 .y\)D"қ\U2SG&ŭ^BIzđƬq%{pu35CwI]K}|ZDR"фxMТjbU FSd歱D/HL&X%iEVEK(D.TM.Rg&EZ3&Jd ǹ"}j!aCT }M1lW8%cm-ZGcs=T߯ 8Df(DTeEUQND0!!"ńܓ iKؾLH{&wA0,|T7 ! Z@AFt<|hW`C@AQEtS|h9e]@O-٥d-O rA` g9|f1"K#6U84]+_`l0Vof~z,5Ջ3P\ Pi|O}96/߼2 &|ŃSЌT5Q5 s: Z ?#`8 9Nұ(CԔh,~zi"dg ¾!Ϙ#| &U0!9f050FЫ+ zwո7^6{;B l==s>SWK` ,٫%{dՒZqddTWKj^-٫%{d%2dUP%{dՒZWKj^-٫%{70kf G ozqjԻ,KH'DB"\6XeB)VT Ol夳Ub9/CEy.ޡ#dJhiM\:CTA4rgy:e8"LKZW|O%?\q/ܼ߬>.8ַ ⾞L"Aw'ֲsPpo˗pUL]q>YPIDi)Lx&8L2uBd$2ɬP*F=gTȠ-0,WZ" s73Z3)Ӫ;KI<IRf(NBz(!Ĝô҄݁Z/1cF7NKW% 409 fx15/ߡC$#o5@G xft=»w}3}nqJ'v&r#i\!bsWfxdO5a [ϟ[W֛t:kZu٬;Ė&un7M獷w>Iߡ煖C.WfwY7}/-Oďoa:*[MWܔ99_ZCTKVl)i7bNX4X2&V"Q9oU%Vj\RZ7)7P?E+Uk6{p1hR2t.02|`LbV%^tyjt59|oѴfX>LҮڦl//adY7'7{zLm.AV?soqQxskͬ-[)ެ+|yw5Y0sŲ.WGffcăګ U6Rۆ]Dmېq'v7&1Eʝ[qI Gɿ"Bu3:3Mc|&fr$L`-džI `!I=4^\5LN1n;L:6aITpL&Y=~8i^;BehTX`jxYr8vq`(jXvǼ-9NZ r~wZ! #S.ȿ -MZJi *9.XdeB&z&6)&x)\Jʼ}Bk>L&j{*?dm,"Km u{Wlxk(&H~I2@FmTLqV)jDME%RҐ |_K;m[7'Zdc y-29h)Uh6H]2j͘2UZkq )~X*IڛfXG,(Fq{n|=T߯ ۨf"z݊\i-UYQULHH1!dBڒ}ۗ iO„;%04 A+1A{15O4 ,{(MlƠA(" Tz)B>42uA҈Q @YH hkRSist=ÅY^p 9s|pStϡalB\cۀz3]0SSqbZ:^2ZfJd?+$ЪKHMV{^Mf3hcI <=B:H>ZfP s~h03ih[EYR`~1c0J4iE u =aFi"(%th1QsiԚB18q;cp`r. ŽF\Ж`I-qfOtk2%]_jOOM;w}~ uF]OC-^w|g]buN;#.DpYРh]D>cQpOl DH/uP! 
`&W3"Z6L 6dlGCFPOc2LQfKRퟗYwYJGT++R+-*2I,(xeJ5HQ%A8DPj$";蘒H2S$ѥ|1\W\ty׏K|W3gVEg1gv]^ NħΊ!}_CTqF 1 ; xzg س_H*y(("I6q-1'7uܿWP2OB ҝ_j!R_ۛ_j)3>Tq*Npi31zhKTFgcB,=Ӥug<UoF CbΈo~u/.s|nDMQ}8;\~ #8Gn5Úhq|D S'}!M?G}o-E[Cxӡ m|Ʉ0m>røelژʭOݡ٠Ӽz'o8͵x, ɝ0l~][yO*UV/̆1SވH3>7Pwv]Cbiy}&DMOa$Ęd5c>)S[D#sց9)ٜI6Le^US8L%‰gG#L&e\cp9qgrZʦIe&YjfG+lxlG1tis\U◎$-h=~iRf)( +;l!)h "ƌ*U$oM&scS0"ch-\ՁsrH+Wu מc~]C/3NE_Ŭ[pbN=ft%/9rD8 We=@[Ww/3el3uD%| ۗz"*9;9>hS 9oA_>)]I2n΍>enx7-KUz _|Â#F!KEQ1$Jw, R*V[ }M.Pp_Z[*m]io|+{W£,{]{VߣWY]}᳷ڭޛ_=1!x m77gt֮PM6f㲛+ʹZKA<1g5AvKA59Aw=z{DxrtEOݟ{p|9 DX UQ'I\ 4hT)4m[2'˩ݛqn3Nǃ'o\|5s0dJѹ-S*o.U V"BU6=ySHd#)Z*ZxtTME(#o ϮT,羇;EɲܓWy38\R䱧pp>vW  3bl9Zʋv|rBeh_U*iCUĨޫ#E'v" +chUt&kڠH߇rSJz-k8̜|Z8,2[ Zg-ʢ~mg}xˑOO?v >88zWl8\("r\6LUmTN͝uu`h8Vm#?VdcM/և){$bjPHS%2Q ϥjV4jY'Q &׾=+e*ވI[PqUUw%0`96s1eQ+#S2'R}DxGNJm2,MF̴i3>g Iѿ vV h뙼[Jԃ%y?ebp{5+sҗYI_T,Vc[w߶})׽tqq mg9ѹ%{|vwUiv `K9:PbU~jE~38ocxɇg7#_|'$sOv|-rxS\4:2E;aY ||"._<gNϤK_j15^^Rz}99 sy19 k)~^k/Wr=Jwm):,ye/S8K[Z?2je{zr7IJ Ot kw5~n[:-\k v\z/lao/^yp?py._B<|t^SmDVh^aC4}8:?2~ZW[n}@YP_uWho_Q{mݻ,(k+o?y[n/t쓀ݘ/9>:*z-.&-&ߍҮb 煏y/Gk1;{ր ٛb"jqzUWǿ!/]F㎮űa|RT]Yyc>ޏpJ>AAx\T SCY#؜  [k\l}&kҷ*]/-EDzXȿ|80e/Eݲv˔N>| wGv V4E$`wR%vm_/f,i"]]XsH&s&Ūrpbی3.pՔjMxcԂco5gkg,.iuKP}[t! 
XޱᦲY;ZpLpYGZ;Bqpm)Y@.GޝRRkΕ9 TMsHIDR-%$B"ܹbi ;$3nc3kD30fs'bZyY5OZʇ| &h`Dӽ/rɺM(sQ6\M;eXDen*B?w|n4YJj)x\wEQG' e%E*g\KV1D.gC?gb2Ü񈝇N̽t%}TMϤ "[R;Lp1'z4;֍u䋆 О@fF=@$=:)2Ї`)fC"Һpց># /Q reñF[[x nCg6A}ƂՅHpJ$*)!P6YK!2) QH_7$]|}5h/pC4*J](_!\aZ5@ @~1 hcU4W.ܖh @;JuX/ LvQ*KF$={xVPOQ,2\.dPn  DPibt~hr\,jUn MD!c12k&ɥM`$5zŲW #[`4jM(N:g>b!}@Ew 'ՀLXBp؎棣ZjlmH7 l!߮-z_~yBCЪMP7dnIaP?tF1۪/e[#"Cp{_ (`I/F ԇ³WJokT!o~T4:>,'׷ w]j 7&{@$yI%.RPԨK2dkISpzK?]M^2o_j 7F_l15?gI~faev@{5w`Ge3g▱B,/GqǶT ]>[9$vοco4b6rߔ:\^߼\]np0!]!HY F >+&=s9s\`^](y:JNIG2DI.6g^agt%앬tJptʜއya s/Kwe}]Y9ے-Jm|,tDc n:?G hF&hB^VZ[wɷfb sIK7#PR&)fSYCbLgFXq#BM\J$?4'p7UWdI+X+LXRaJ=КD־Y4"`pɼZ(nq`h@i1'$H=Ǘ<2g 0Qq:4-_\1Quvmo=x'LB׺s{p`:IQ4 Y6g{zsrPab833heL \ V2!W4CDuk~XV B4M55<\pk@;p"L6̋ `Ǧ'M%lvorrn/!iYnjsFoJAdQ5h+gV* (= ܃cC1S R"Lj@Ej2%̛?\͂i]t2L3+ʱ5.%~Y~,j$ P2V߭_|,>hvU!Ӱ?9:}\۩I:{Z"l -׆6J{Ηf:jidƧ0/Up`WKzbl7Jx ܄ҫO#=YD7WOman"-Ip DIM*;^I6*8IS :F9%x7p8z_<{Wjӯ`N?xU|F|m{te%kbtS v\~kB'#3?+?i"#yy,JwY֪abҙǿ*݁  ;݂Ғ!˴QL_Bم|"R )΅GBp!uxE"߈a3{ύLu%SKgx KN#N"=i*tfv팜hT39'ףa4lvaU^Wl^UG~dHWy\-_JPuèi_3B ij-[D0Q)%cA ^bt:}@?CG(Pw *>&WtPi+g:; hæ[73y> =Xi1 :T0oă £Qs'$0*-2wʨJQPy*(Rim.Uº$?PGp RUPOM0 8ҐX V4z*^a\[.Cܧ!Xx&!f̋'#`s3HȂԭ նz_f[PpD1BށR^0uRFTL=F BB8L _(S ?K }]f+a4-EF!ZrI<'E}n5H5ca>?'q1#S+zپֶWkA^QIdHk-t =q;]ڿ34NM)W}Œj*C2y=uɏUav&sb͗ {WRۥ Q1~Nۺρo@nI47uC6wC6hYށ0 f0bŇlEQWu6Mn+|tf2ҔP|?ץ%U(Hjb؜9FAӨi7h7vοﻷo{w8w?0`S8Dc{~=5g]u ͺҵroj+> [ __ ڃt>5#NaONW7_A̯s^UoSlk_E>jG6yW]nSL<o_Zk|m6LJr'FBG- qQ8qH6&qT@2w4"x,*;Im鼲qigmrCTbᕧa¢:m]>1Kl c ?Ի,=8K,)G󫷛v]X][cR;ĬW逰  "7u'{݃dMobm9i: 8oy` 09l2*Me,ec #EFCRE.(qCvYI(aaR":wFv : [h}ՌSͱBt'JX*+0\,Z^O(''u&g[ZJɁj[}twHD J{ugDGMk 1S@\{@Dk:ՌB}[aR>RL@I5Ȥ 32x5sf"(5cgl׌[)хqcuX T)JcLSt=Iúdr='y4s Yc!(iS r.z%R%T{'Quhd0 3> K(Ij yI4R:",ٮpcڝqcXk묵{/N~ :Շ A KZ-aO b&VQ;Ո<㾾>mn+۞OH&(iʪ/?m>zbKIx(TےcN}B |[3+0.M`,R(!z-ۀBi0"bXD2G˺a䐤2aHMLc1>.u06+n'o"gǶh﹛#-_,^y"1"!E@P"$I0 nYTQrE sݖ5AΘق.dTjɠŀ'bQ`1 A^vYk,G…T\YU4.ʲ*"fJE JFc&=jAyru_ >G؋8S1cQ9c`WfSd;"4goݱ&kk^l}0(!d:Vt׿l ؖ U`hi)1H)h3tꕉW&^Y`J[3p*<Ԛ0w3KeZ A ! ~1N8{p" RRDpCLDK9 Q u]3rVޕdЧ>  v `QHiX=MxIE)avwuuWЉCꔽnYpx}cyxC௧>c? 
øu4Askڛ@T:I4:%H-Z&ʈV( iXWۢ o8 r/o28U%TA|[37ZZf'GQNG? ćLl[KaU4v!t+\{å_W%9:o˩Gu*u헟_EjJP)N@.$f@Yta#g["g}G\}GW?\hjln¹hC]cp?<_f5ZS:nh{}:<`M'+}싚Mx|?>KojҾO]&^7GsR#ZW|Wo2Gێl#̗<XX8DL!L&D fVluS=F{Ugo9NRKY???dX|-JAv]=,|LrL4š^_7_H>=~on 8");p(rģ0TUv|+\:P:,NG# ӄ)lAdTQEPR J 1B|hGNhS[%i-i7rCƛQ]9#S/_s[O:L5ߥ2%ॢ."F[a*͍8 //t٪GV}ed`Drͨ5 %=c0:ɈX$b `M0VS1%[^9~JË霶I3g1uk\'"6O@hy[=d2*F#ݼOy?1|voﶯf3!D0rV,ki-OwOކ>10 7}J/g;_n;yXx-YHs֜lN'-9. Pgj<ꛟ7gL!gp 馌D/dVpĘ1͑F"T0B R^=u僠8=%Cw9p4؈ԨM޽ 7?f!ߛ[nQN Rr=cbDmbAƩLI-jXzM}]t7=eGd6n;,\L6FenC/ȯT'MvQ`tY*7% 7WYxzNOƀ=6 7y Y`ݬNZr*Hf*ϺJ>`U8Pleeô^%N6:Ȥ}D%t\vpZKZne jŬ8^X4j,o, ;·,ˑIh`хcÎ3=o3ہG&^\mXbOcsmTx&md`6wq8NY`3%IE_/E_q S`@y3(j\O'6N 썾<~l?Lu%AA G_$%e 29itB`:ztL$)gn:}o>N_LQ=W$T^ ?W\jgZaSAjCQeaSuUW<dz]Cjϵ`hE5gW/ GBLJ3eEn4-vLkB<.hMĒ\T¿3&H e0S: bH]͆, XE7΢QȞ_˕Yrv6NJ΄$bu4tX~n[J&NЏ`/)0b9ĭBB=)j$"7Uf|O]|u IkGhA7:^FowwU^0KkVu&e(zQbzQer9^TZEދ*SidE{Q #BLBILhu\]L%U]Cu%т\ t%2w]]e*+:iת+ XAW\/E]3WuՕK꣞ 6Ge^ΫL%#U]GuŨ\\]8Xl`BoorOO7Lj 9|è=Zi2FTu%JKRW@R2F]j5Q]WWJƪz*_~bЛk2Siz/?Og1MT/?sSD,8i0$)Uo!%UTĥFp DDqZH*9)րUPŘR)` m8'%y (x2 0Z0 { Lp<\t|#_×vb$5ˡ,Cɠ^jWi8O n­)1"RZX" q ryfuƺd2t y"6JPFGτV()5έ!9x ) L`hRy"eD9 lNlNP&c{ӟ2@fw%5]Ŵ=,|MSٖBLS9i#M Ap?‰"۷8H}GDP̣0ԏTv)r?W}9-I&DHa>(gpT%ädov8aBʘNB\$N@$ɹQ :O>,.Wb@C% Fz0[JŌQީ;u-/Fok|S>*<dDSl]*S^*mL"bıHKf򝶻CRu^\YkFL(KAqҺ$ NFl"kkH )٢Rhc"Q5Wcp0u# `T",bY#T9{.˗_&׶AGݝudzomCy{LʴqLszvu1g1ukz04s%XfȭMef ?2Om4k?G͇뇳q5\i-OK4ͼ0r;?-nVw-.z71phEy<2Im,BtG|2mc5S6iy{Φ8wBc(N*z)jWnfCSzyu(oEIw Ǐ'_ZXCIau„x.!I aaقLq2,&IYǣ)3_`O+s7 ՝7j u[ :l mOW6,l꼮t.ЊsCkfЙ2r%5}ped'$>=HI{'dֈs,"'ZD_Dzj~+R:cŽ%cdoc*>dFkh.3RE, MmXSwu`s Fr Z]aV=qV+Лea8%,u FԨl~Zs[8Vtt&tx9ratX~n[J&NЏ`/)0b9ĭBB=)I"7aO]|ku IkGh?A6hPFoww5+X0KkVu&ehi-})^E?`+&mԈGːc"I0UhD`nCw*rm%8 s6q"<1QRROIviR=F|1"/0; +e< zݣ1C &ZG46`mIBS$ь 8fD+`fO~Ux_(-z{#[4۳N.L]+[K(vɽyV< džW뿀h%}z[cx&MvM;NP"YE8RJq6{FGg87^ =V"P&wXKKP$%A_CuB 9 c0x4^l,12\2`6NGqL:.*ހS鴳FFC4 o5EThR R-Ubgq@#m!zwU{7lO1mٯ9y%RW.҃'[]^<'ԄCZ7}& #KIERJFS MedHL/7z|gboIHtUb/\;ra==r;} yn-q s뢨Y…A'aǠ)W2$:+?Z$i{%iMޢrCp gXK].RC"jWΒ$c'VJ& v>#AlĜ 2x"SJAi>MNЉR-*B(]ɝ X TޛX.| #,< 
BDN*pRSP#TZ)$?"JBPqTDe0uAeL hyGFԮ"[ )!Ϗ4JpE\H A'jx hVZ#H)B,N̈́h,W9DYVWqTƛrfn$J~?L5?85J(3%~Rs-7 ~J\u7en7^".I֙bqO˺a˻a4+,PiC+|Bf.UL=s=48.ýNecBH0GDGrOM_cOEzL~gEhQ2nӫs 7gy?WoޝSxvWE-#N& یm57khҵnsc7W6ꃘ=rf;~V W39ͤF7nWb[A&a$>~DE՟2j2TǼ3rڀJhe17y'y5/ ~?kM<."݃{$i21lSDG3z`8j6'[G&WV]8LaknG# Rye"q\hc(qg@:%aSM՚LUk uZ[!Ύ_2(/=47|i]u84f׶҆Vb$mܓ:Œ' A8,qpz{KZuӛgC*7;37ݏ<[rӃQs<] Ж(NScp@ :HF%S69k5zcګ?% /~{E20Kh8i `J!xԖxJ4"2,oQJ"i  Ѣ#8ŸZNѾ"mZEŠ2K(A?wu+NjkrnAhrAuF*$ :7}_AUP[K?[RUz8%Y?ߗDr4aH^#, =08z i2\%a*2JR9E@)C>5^Q'm$R*&5rV#Vʓ`ak.ꖱPpX8(mtK"7C?N~ v_o#6 #V+A TETHbQ/zh$o gC 8˰ɥA#$&Dm_nՈ0\ j&뢶i'$0@AJ#uLVȣ72ʜ;R9J Iѩʶڻ5rVaO*e`DlM>DDFEq{D< {9@ZXԎ@E<1+I%h1 f\"opj2θCiQFIs0{( $y4^>~9 /ʵK֤d]\-"=w޺܂RLY.|}3 uZݮRmOV:{}iEe9nd-%,ԡ\%K̓+;`'JXwPgbԹGv|]Ѷ+EN~ov4 |wPW"N'n!$Y&y0IRH 餄E"CTO%o4ew4E C,Z8*jɂI$TU"3d۱ ,Zc:Y:4Sƀ ?BrVdX 6@uFpM[R^jc#~z(\k2&FXKS·oZW%&?AvHNqqH 5ۻښ^] :&(mPS؋'45c)$x(Gnj!(hBڹ2J\B׸hXqI&5%6ңZO'/)mnl]r)\ІH(ڝE Θ ?*1!ヤB'ZXe ޒSV^Ӣ$n}?T3q_RVEwbw62o+Q04Dŕ?>mt11aa:frzG׏]bWфJTo+9.1aq ,8Y!3Ř+T\ S|xV\\0֩?FM/7ݑ@ߟ8J3RXhdv";F8f/(N{OSN8C4%UYbՠo3gA_ ş<nm\QޮBB*J^s&ZF OL3eb"p`NE]6<1; 8>]jaʋY^̠U*Z0yG0JߺiC wI䜣Y#QAC*1Ʀ*17T~ kу3|hfImiR1ˁUU,؊coo)TUfA,G{?*įWT}X>䣓,?[b! -XԜj)ŏSHjjsjFчE WJnb=BW&Q*hvGpWsjZ?[GGrja+ݾ\<*ʈZ -]1t{ yo]╸UZVt%yA?**@>0@mx^BWjr<8Ϊ7'NTt1^A8--u܄Fג6<*ҤV_hJb[F>xѹec5Of7anT>C98:selw~*;WW8QD1jzʕBe'̲v6fy\:In=d чI$=0F'f7\q1:F[ynȕO#?=Uˁ,%PrnL AC鼑p*sI,߻ZnY()„Ԕ% `,0BkYTɊ& 9а )ՋY(aU=⨨}Ye5 Qj}yC7!߸k\+O8iσEW^?]oW臵2$=WwE䐉Yr,;zlK"іɻ,wf4^B,+ڹkAA6dWI;|搋N}sm_'AHh"Kb1AJ6Kb^vS&kAcEKCxT3 a@&V嚑eRh,kޡ `|vyuiFǞ ܄tGgB#2!Q)Ul$a(j)`E"LVe4:RDM/Ayl2HdZTP 0l 7Jd4 @gz '9/.py=O4JPUd0v>ƶi{L[-*@mB^nZZpM#[aBd{nH`*(Dg,@S7N/:)ޑgӋ>u}]/Y +63z7KLjCƒ6VQ&e@IBu 1i8'CR)A(}udKF1i hl4#gͲЯ;-3?[瀟j-CL շ6ߺFѻ5 /[nԙYG[(Wd4xt2,kPbŒRtcHƒ/:Zzf;^[J٬3$p$WJ6 z`[ %"lx}҇ը: )M(Au`O'yr. 
|jY懓0$~̗X.;Z8- ?$ޜhV]M' w--OT")R uz)N94"nO׬/Nws9sUI>JQch\Jr@#nyYp$JsQL5U}ȷ'aD&WL^_7mݮPс[_&7_C$vnhW yèi3ͮ>^^ݻU陟''~~?;f=&q~AEr>,ws.9N/7U߽a[L ŠwHvZcH`Vtn~WZD?e*>n=vJ<=!׍Y1K4;Iy.#u`դ8FUmSy2n5oG7eM-Ŀ7;NO̎bu7ǟ~}+}ogzZ<{ H~=x4Z ͇ -z6[kuG^3cNhgi%w"@Z~͗q7A;Us3'4 b~(IfPTy4bZ`u#f#]'u{{OG iH`s#);R+*%fᲊ:Y Ę-ކ&1ySr~N4[U kSa&6hJr訣NMqܓ}~fڭy}v>Jj >V)XEZ >2E_RD_;,M3MvĻ o{edЫbv5r197uz <^7sB Sm+ \)(fs K[S/.hQjclJ@$vMv>(3֢L3H% [J.oԊ|#r48OGm[T7BbA)J}IHX`q&فՔ].?1%x$"BШt@VEy (({:@ rM$j@T!&XѕWB1$]4xxI*wc͸eL%֟b@)`(#Ё!R!R*13 ]ibwYlm-5V;݋^xm1pc{;ۇrݡ$65VUi{9xun^\ѥF2 ^,$Nşj"ISTВ7Q:*Dkދڭ]8iFXit3Bf@'5_WϬ)ODOON'yu~n=0};+Co9_߃ _S@l۸σe ڂEmXZFYJ{HuH(R}U[oъPT 8αm}=)Ut+Ts֢#J3E:R 1z *Ȼ"u!J ^_˧Ges=9_D;jf!>S{H%f7GHIF #DC81 ju= yUHBQ|ʦFOJ2e,7NiÖH E{,$ Ql"[_L6㓝R&GP2{^CЫۉƵJ{:yzZ{ٹ w{Wlג"T+oQ,A(%RVY'uAl G Fl!d. }M5EWJMEhXT{8*e2ZdAJjEdړ( g .^>.\PT:eV4=9麷yy|;N檫'zAyaF.&@ATUf);)w"djwI-4g fE]J6uB@Ny=(-)fmEfGCZ2.Ek7]o}{d,$,DzQ0R٬sL2:K#;tηՇXB4J "Y&1:"D*Db:кut3r6ÉQ_kWhfFTFݯ׈F\W%>+ `E!i2KyQa+@F\9\MH , m{:`8[UT]if+3-%8uM3r6k3zq̈́|͸dW(E^/>5 tgȁLA"1 $L *[(#˽%Escч͸cW}C>|(ӱ:_Lȭ6aIۧSM!-pCj48)Sg̻%Jb L3.Z0&;-UOj}*\m "LA`bOiu(2ZZmc+lFj7Q&5(N2A_ؐ[n:Wia_d(}(*qqсȿe%._w_ȫ4C ֱ-c$*އBIH]Hoէ)Srf>CJWH@Hza+K^MJRN{W*m^:Yv~0<}clQ 3;;AwQY)Omk~XyQ F+$f1&)QlV%%B1BM6\.$U? 
ғsZF+k E-LI@ԺV3r6{Y'^>o[>ڤ蘙?} A0wiwGNi&Ik/PxW/uƬ;@ÂX"ƣA[6$zh֡%;`50tLai첄A&(S#KUt{C{X~#-81'mf KYGO@eTX4#L/5{to}㾁]1Hפ{y l<֗ אYwnlږ+|No9JKC'SX3S`;Hu[ Dq3U] zo86j/[QJcM E&KZI0|Pgzȑ_ ,Tyy͝׭Y@b2b}z?HtR)vmz3"\0^wJ-nhtet3yzZtˤ1o^vBeݧݴOf}Eh37ɆW7yߩ\LeqN֞zCk.9t׾61iߕG Znm~$em>2^UL)QGd4RZ@Dh|QD>߫n6EʠhD*¨M..Bleƻ\Ls׊ٓ/- ]G?/K-b/ 102,9\X Bg.JKɡL#Ӂsj7ݚq>d m&l9n3,9_^*}+?m릾|z$`\/)%H{{/ ~S)pprQ-E#[)*ެ ^'Gאd eKh\Yzxew 2nL,Uc)[6^ֆCDVmk"ӋAO ^˝{5~q1G.,d5o,-oX)qIXlG[ev']2BĞ3L"zmrձ@;Ÿ3#Q)9,"8rw;-x1ԅJku!:׾k<.|s<;P5O''|@.*;gT7DdK!*XR-%$(1'1;ld_J!5fקǷow=ԭF߽3¦U'\R(AOCP#+pI:j[>|Ltmf4˞yMN5iJ Y.>OO4M*O3*sw@Wu 7B%*=Xyq*ODn#]&>B3fClr׫>ڌڼb[8`Y[Xrvbth8}de"Xl`)c(VUQ٣wV'˹q$8LVf1B#ʁ\,d#koV$MIn,dY&e٪bRn^7揫ߧ5*ax ݭ,rS6Kc&z@&_aeBړ0!Em<2 3v:@&80K1(bݦ|6;a#G$P't 6p|dy!UtGpD#ٻ8m`)j+jS}^õyx~yKtStk\րf3}`/ALWW% u{/frnZN;Qən `W'YڧO7 겂rPJ Dw^4,W`{K[ZkxD0Nʋ)I)5*`3IKggtJ_ZJ]H|||H-ԛG`oRӌ5KY2\aNl[A"gX @ 'M.>ړ:߉" u=se WMǶ 9}э2/`słt>614P!ED.t6!N&`p4X >HB]J\%Z8mС'-tLBYCh,Ѳl5rv [ؕ~+nQ(ֶMꭰ[o#D}eR\ky0w5.p֐h<ِ:RC[5q'nY" ;u" T-$t5^0H.Q1%1J@rpі!"H&$nJq/ǖLosQ+ 4HMJ9et9 (2*+Q(C(?_kkE LEc,c yB:Zˬ@`5rB(F]xOjVLrb ^G{4")Dc V:epȴ\:OJ힜 YqQyWڅ_KV$Qq$~ ?_ӥBr1}ƯrPv^na\q7y V5^w1<q~JGq ~<2$kZ#1E1.8fgtwMݝMg?mזfqUsQKjL`ӳ pOnuOCt|pP Vk)Kw~׮U"ybV }pôhYk-i1gaQtOHhVv~}ⒶtFg`~̞F"pGËJ{kNx2(GbϤx;D&s@=]ݷ,4Vb'r~6=ԝwl]u׮gUJqҴxRFʍ_'7L};MIq`2w{'׸cRMbw5OŇt꾭1; - )|/i(u{:ړrtkΣS9 =yBd ꯏfm5تJt*]>IUFȾx>z&^VZs#g! 
rɳ5B<LmE*0L$#HXI%ټImX꼦?N'^ ,,$Y Q'V,7Bc9R KoA{tSPjU㉽jT۷xD3CgpUt%X[mEsBnmOȓ@1FFV䑔0"Jҳ>z@yxy$:SVqnזkr.d%PGnS18 NAb&s#w;+- ^jMq7.]?vk^>Csth#Yic~^]ldBg([:3V#1I( Ie廘(!'1B*\{1 Ce9E4]ܴI b l{2nJS1ɾMRg7w;9oٲ6'O4Eg-c=]vGSJFOU BIdFFlf zvp6$.]3aKU5Fc3T6YrA'RlI&]*!Lk8W2)ښ95fr]X3ՅXYb mէkv[ۺʦwӰ(?4n4| _J+klR694>ȅMV^$HSL˘mP%GE+&_@P?B66AD{ m`XC,|]Oq͸<];ڶֶb2 Hcj,D9PҘB'Sډ+١1>J@@r  kbbA$0hn2cV2qkW#g>leOE#V=5`u5b^#+3& 2hHY\%$Y8&> hO)+C0 #[9cI6@ \ŕtҼ8B KZEQ R;95Dړ^.{ye{ōJ̃(W 8FC.2-}Dӷ$ $:׋ЋqǾPTևb?}xQ)n:K=r' ~cyQq}8!$rrnib+xKܓ-|cJ (7K/H A+4 </uFU"XJK`/39ɲ5 v3Ҕ2֖ Hg4փ!r>S'1rR@ʥ(x펃Xmv= ݉M h A7`j5 3A69y$(#Im}l'>Pgb:RO }]()^xZ@Id)"7[U!5[N{1HAk*^J(d2x.5hW"f"Cz;!I }#;nc.3hk3wJ>Xc8tcZa/?Yl) @3 NA 7Y$о~h.T,-H"e^[sA>Y )Go:Ky`#Zf;O}WNCfe 2h&&<v>8K~^}0V4(ZFi<Ѹ?~%S{z6^s|5|wڟ+M5d| ]utCCkm0e+!Ց dpjT>Z7Hf;6J_TT*R7Cn 0x-A,Ut7 {L}`Nz#, UN?{ȍ_Il&$2Y|:ےGq_խmɲղ$r7fWU,V))58Fp&QS'BcZ&Zqqea4xM{f3aٵñ>̧swU6VX[%f#}Nd~? ěLlJn;G&~+u©⥟epKלz# ?~|PljEI$Qp0K9+ש9r&`JspD!pFIq=>uP'ism<Ҋ=> V|ofr ۗHɼG3LG`QbU^(vztyTPVDSaD.j[8"*=N=/s7kR*2gZ{q1@ wh- 9{h6fcL w+IXrN‡J/MXxT>6Kes'-)1aa ֘ێ1)!n+*8*|!2 nDR}}6OUU gS{_.jk5VRw`zq1سRm>уW_]`+&v шЂ ]7T8JA|a$I$ 88 @Ei3!T]Jm X31WR0z?{}0lGCvx(N [6&tj`:%J-)@n.'lRqhR+ q6{vAly|qW3o}%FS*a‚`',( f_‚2zXPR6, Y 3^V\T{ǝ3T7D5[02Dq tML qӀQ>C ":9;PfO?d73j/jeDTћHM?=iD!li2]u4j؎z 3H}'⭺zFj`{2䚽QWHzU-+O/ _bF)6y"Ͻ+9pxAdJ?OU~=e6^yy_jXհh),b4W9> ގlfWh$0㼦 ;Üޤ\x$ ثJ`@u>.>ݻ _ڢs,zFBNM8N-xK)&A(ې7ZO\%>PXͶhi>)Z8 *8cj6y -.yʘg:`jPm.m쯑#Zp\tCW5{hqPYUkl}s^0$J8ъ1 FX+12(^26` '1`Q 䴳FFC4R[V[\.iYcK(T۰7F zr[SQAIXhI@ZI Tx)Z2Q@k\ithiF0԰99-+(t:ؙϺ>{{+Q.1\V3㥭S~[ [-\,i-׌Q#ǫeD[\VXUTpM2kb2I$A:v,vzm!Eb\FwLC{un6+x>Yyh[EQV0A'ńASPѤ$Y RN\(&y 2F "i㹳a/2D&B\P`H Yqy;'u͔G/piř~Qv]]Cǻ,,:qtʎQ!:(Dd<8 @hq.\U%I"O~ B|B/ɖa'{< űwvKrT "wAHוXh-HŅJHRi R|/4"Hz"RdB>Q)nswl/<ʟ[fADJq:_㮻:AC2yӕR@vLaaWTI ]G/){Zh9rUQQH''JeJPnq޺Jp9(/oC 3G;:+[*E?&?>zƫKhʼn1FzWjC{59(5g|F/wZwͳ,]u|*oU>p%{%1[.g3'Q|=*~@LECŻ`$4#].--~YfBb_'jF]BN݌XHX ?aS'#zH;vǿ{)%J;Ʒ;{uTG'?|c?PO>Oq (z Mo` ?|yojh4_eh[W¸_Mq1S-ٮ,||}/|.6Z 9y+NAQ/' }|EI?RelP;䍘D,TB?杸 }1߼>GONdI.1b'e1cjüHrݭY@@blNn6:(]8a˚t))2ɀH 
QBgzH_!X}Xy8A䜗TˌiR&%T%L5š4,s͚a_urrPJ`:U:jg^mMΜtj;lkK卡Kn.=«/Z-?B9LS(dt# xhXAG ob_}&pAaD[U KR[b̊ .[0dy!Y]rD| >J1T(=]&0Ȑ5,6f? vj&GQO{:7o2/vċ"7]M{+t֍pκo9)~;SMxJ""4F(l %X=%L."m{̭T/P1d9$B4%[9!%o%xɲ3 PuKgi|8i@<&rB0P_dQ.kة%nJ:YM!CHW,(ŀ<R,X*k:zVZ6UCZ&4p)Ƞ)x AZ!%k Z-<d1ppAzR±g{atV1N>w\ SL/8tܚ}v*qS\/)z淮c"~FmִU"uRB1K/rVCv%ZLVAzcs\ 1Gе7\~G֭à8Hǁq ]҈92:q2A\&"lur*pP`FbB4W>6 Gw5,jZ@8*YAO8]Ҹ?^'lޏFM9z}~Wn><]O7r ~V|zܵ_ޣn#]X+¬vt=[뎮.5rvS׶{n:&SjҮOh.5pv.*d\:KwVd=. ;ːq"\}IJ teS5qx~{)vXuZ= I !$qT1ZޱyH^lA1 ,DYJ٪פETJ]A<"R#$JfceE&nEy'IM s=\KRQuWR'Kr~ Z|}c}iM^]혩8f~|GJqUT~B:1El(@-ee xzK6r̈́s"&C`Vylʒ BIّNz,=Lk9W3Tmd&BͪqjXXme싅2 {*qVnɋE:v>?̮'TˆM`#&Dr1'2@RH02uU̓"sӕl{YC1|*X66QCАp$v̺,!8#fUK݈fӴfb}QUFm7`Bh4 OIkma&ʃ"%FK.}+Yc\]uV[%"|JA+2‘qdGm"2S%*N K _IH1ùžaձ/x(@m Jo#ɨn>73KamcE?7mcy ?2@`񷿖'WnL72o#ZVԋ۸(ƿ_РY`ٷiv;voo9yn"["| )k YhB) INBmM{gdǭ.w33|ŲAsP(-WP+FvMY ^8D:R؉-2`ܖ, J[C [dp3>3W8OIHߴ+( ҏ8l8,m/qy\N<E[$bѡr~3c9iuL %22GZ[P!D3^ƔJAFD8 An  o4)fpnSB&1NUF# M LK{]`f',V81[Rh]&pٲc Ӑ€#gC 1]-5æݪZtb>Mզ-75~ߚUO98+3Ƥ̼vYfƂuD dX˴ KŘxl~) гzymC ahx!G$Za KyK8tZ(π0J>=CO)pgjyFdbe/iB8omT&hP`ŐFfkW>ϾY=@<A"6z#%VJC`ZERkg!FY׃}bG sO>lUBoT)1 ~I6O_r5ζC[oOqsёؑ&@/ɉX2.FJNGJ)Ƴ) 30хtZC؝ `/N&LHL}B~)g|5T [H2m@/ o4goTp uc(gJf͸E6)#U 4g7myd$Lk8~7r݂]樍_^h!>^qE,xuge[),&5i-k%3Og:=WY [Ey;Jig7ׇ\ћGX=~;7GoJQ~04b1ʲZ2*ߍU GP9OyCGe"9"sMP5YG@v0t^թ )%{`+2,I&8$$΅0fI 1tΒ=KqŽcN{ T+vJ}},כ{r|=un~zǡzi+ u(kw2=d:zGAX4]-6){skch)@7.&uTN8{Y>cR4sO"jࠒq^6Ф%w6`yu+7JR(IJRD޺ $k Qx Y`00k{/{ACW(![Mr_'ϧtC؞nHzi(H>H1q4^?kknq5%b3ݻB󾱚YDo,8Y|mt4X%JHetncL;rл0g_^DUY12Cp,%g29qw; BWybu&x yNyb7bmfwe۳sm;vO]/[E[-5>{\8ʏet]*n|?2lקxnpFn_l|Ov k~6ͻ=&1qe6}v8G]b\jweҟRO%{*m_PD_8ydN[MKO[]ĕTEZ{ouR[ՆK# U؝\L \i;\)A prc?!"'W[UlHiv n)Uة+r8*҂;\) -•Wv\"ct[MWY]{N,b|.eh>\M4'tQ  #n>+`ݻop`dyr[m}.FIW_2˨II|v6Ҡ/:gٗnl\tR뻓ūJ9Zi gOK^]M{Zv<-l{\(A@r~1s_v-Wٻ6rdW8iwCN&fK1։"y,9?Ų%ʖRjvǪb]^a?z=c\2NOuǏo=slϰ;z*x]j|<|zG`gTczy{iv=\`eͧQ+_iT.9NO7ugo`Gc"kl|hV"g1+t (/r^pFt1(T=z]6[(yx='ae"[k~JLjEgR #&&.L^G&X8d`̄(5gGB$S2*xh ,u<'ǒQ1|~2 |=yZۤoYh<[[l.O=ռܯ}Cb];0%HR`D j<$r Chg$!Pruk#*3|2ŝF-?z4Δvh 
}ŝUM(wf{%>N^x{3}nY1{Lں1_a?]NV]r7;bp;i;)N_FX@6p`YŲBҟlz8iUw?亻{V'K'eXutݞ7ތ7ʽFӌhR ߧ˽ؿ$v w?ſ/o.ﻋ/::F;Gۻ_wVnMZPg }qy}a⾭1ۏ- )?~0 Ufɖܻ],~Q \S/7V[D-U}HKB@|,16O GXS~}F6ztFRB% &s,:1EDցr$JHg ZBg$=ayM+>9%Ynw{䔅$#A"rr2uI&kAb8BdpMwSPgwm:=G@TiT=HOڝ,vΉWwY(KzUWeCZ/O_҆5ٞoI)4nօ޹ e(dWљFFyhX,(f=x3&μڡrzcZ# _:';[g&ة>/X9?NS3|tA"4VhlTJ=%L.;"_ N<[wܟwl=#xABjJ9A c L',^9*]WEf-vxud8 ii9C9"&K*]5rtshЏ'"i~. Z3E tC]76: ZedB?VA:I9I1ZJVdTFVcPҁf&b9dq\Y WDN"N`Y'n}psR\j.c"~F :^dIZhd2y.'+3𝌝]4KyYZxBZvurcvG%F9][|C.;XUT_xr ݂5XK yd*$)Z%KA)$ *xOMZť]\p>|]R8JjV3 +7mYwi[}ϯ{wpm&о*i{7ߟS=;0WY݃ұHu۟ݩk ZRΦ/jy6:mVکT/RqA}2au@(#w#ilAsZl|qexT262r<ѾÓHR*B蔽#5)nR.䩝q u:g{R?FPЃ&3M[ɢ*8Lج%8;C-q]M@ E oi-L'I 4I NCfPhbXg If **f(! . gO)"*X! ?@qt 29?]?u]SS~'{e?ݏ:*/%h#7L8*29+ @ʒ A۔ɤRyTyp#=ZL UCL\5]V}*c!tXxT,\PTmrS4$M/Լ{|;xO4~0}Ǔ l`CT@"H"s"+3$t Z.,Kf4cd2+ZbT") lJcQd#ȶcP39bfBĴZlG0%TPwڮ2j; vcQBاd3P4&-i. JΔPmŢZDZW6G2" WdE kh G,% )"hَQ_ 0 "V=Q8C|Fa1+5B")›t}c@ZM6Ĭ-p*":6F$w* ΄T\ihL,iXUFjlGoH:.p싋2.;\(%GV9$R|b}j#ɽ9px*xX;CQ~xx[v\=v}4όf^G ͯn.r̻Nqls6'm"MzCEk>t5114h$|Wr\/J4EtF_K|N[kѵ1FkH\&":!XΏjۓ%vsWUv!xMkP3|)ͲEs("kf Yi8isA,$cPUDihctV%Ҟe"B0rz!1x왨FΎcsu=ݔ2s ci7a6`If>n~_N|ݏSe}d-e8zHR(sLL -2GxA}`")^ƔT^̱C B0@ ? 
0R"( }"M NmbUF+- m LKWSbf',@8[$R]6p0eU#g;0 C FJI;xGtgkFl+g7Pïkg6s2:kmehA6\$$c,.,]1I:YI_EgOAfqg9k2'̕,rC@Җ46{#k0)X >KƑkSRv>|8xw4SnjnOSoŒ/Ll\$xt#?_gZ+ <2Iy`)xd4&<Od]1Zy>œy2"j_~~9[V*9*Jfɣ@P4GW*EH1x *xHE0$f:uzb=q|Zִ;sJν ?vg + 6"tt<[=-[29H%Ph$.dzU@p`̟1NnNT5ط`\?^xX3J%q~O2\4kN\FO ^zJyF)][s7+*?K&@#UyrNmm>>nm*똖dR:4HJ,Q%PJ8 33 ?| X8 "!SqFÔnLNyTCmO?9W sRSG[H1cTkQyf[!%1%w{gMʟY{D}HcæsOURulCls~v=ηN;z&v`;uxϽQ9Rf$cL0x+)ס9ϡVM1h{[R~8FECr,|D>wun 6mQz6$w[SgyN+'r]1Ȍr1烃ߖDA ۆ~568yrdTn7ݏ˘ ~ bOq۳WKx8;/}].n]Rj`[u}31-#S&)Q.})Qޤ7MJ寰DB3rxϏoX*9<` f`'<Ӫd^ܡ@Z™Ѹb誔ҮN`}b"-%w sϾjdOؽ3cYϺV.Y'+V\Fe)%4Uӵ~nҖ0K.U;_WX՘L1DZ7bYl«C놁SD G=PIAEc0)?D~@%)DWxrt^&u>[_Xt$C)&#>c.bӵjaz-ٙ]i[_5zmA>'=A!FlOXFWΏem6=f=ճH?),WEF]^G}0Aqv)} yPiq;aG;nQtV֚, Rźd@#eu$.[ctvcԞ[ш PHƇU%-W'"RDQ$`Nbt8딩sr p_` q%}1^d{a\tC=PztNCHdY5-;?ﱌXC*jqA8(!8ZkU؉~N ߧ#m )ʟe1c-+P/6k X%n &Jh l=nȷugE`MojdCtn݊>̟Ɠ( ;SH&9d&VN!ٷn#J#ՂT#mZ[`)Ƣc\KP&QդBg &|#zvz-s{k_SڍSo+&kfVjYSI 1|DFq q7qӆo/ 8 ESK9MKm}*HF6۳뢷ªP̌CL](J ژOg2?T?=B=l)Q\4:po˿Q2#>O`#W7Ԭe U#r>_ !#WyM*#2ѯ+` #5[דӳ˗mWX_@~702Ccy?-97{T&ym>s--t|unYhn4s<_Y˷r};ͅŅj|? ~ST{2* 61JWG Ɇ\DEd" +2Zv5b 2 pF@OIՋMV d Z* sPn\vӌb!wBpXx)Q{O,䲡ݛɋ{_]0<1b ؀ bBK5A\rqީq JgwL˙]۰(9{!7`.Y/oW@_ۚJUqg&݈OO9Em5)"}zˍP@H-$o[mݰ&H9+Y8CfQ&`M.*Y( q!JY%q bwn<\:-\n~T_D\oB h|.JEQլp+lUT]~b,s-,Z)P|BhPܖҊ-VŴēt&8IKn܍_(5nZP\ԝqQO8ݢsh^#:YPȊZTZb bkz7.pq,xM;axy#\ ܋m17g()iTNOPWѻ&})zפe3wMJOSѻWXΰqݠ{kߧccS?mQwmy/w bǤja.H|DR-.&U+yE:QJ}!l-nHK5[|5YEUCsJ`SV6)*y8IǐHF - g^*8h2ϏbY$bW˲ueZ TD$Ob QAFA`R hR LDhV1OZ 0BbC@'e5W@k. z$eJ0AVcRUY #>;'%iR]Ԅ% Հ:$ ĹNb܌-1r ;&mp-C6ڷK$gP'<"|j'=s`}55R#RH䒸0lRUfak䒲! zBτblH( XEPB&<4kz]1iap>3r쿣CMmK?{WƑ /yC@>xm`N_b}J(P[=3x|tpL 1,JgNwW?ȸ(84qJc1F13ɬ`)$ŤKKK/hZH;Ϭ R@)d`ː"p0`A)=H+%"񜂔*m6Xjo,L+ ¥x34`jL']|p.vnye÷`Mf/&9>B0On6[enzy$x5 qR!'IN^I iqZTtXrԨ;y-Xhm 4wFm6*GT KIwiX#g?9N{ VZqMQvEkYɲd, U Oe3Oyǿ*Ͱ-(Mǿ1KU)b^x &LH}&<9R O„tYl#="`)FĔ)3<"NO|Mt4D{0;cʽ34aFhNPƆhcl_˧9 n4 Y>W]|p.nvhۿfMX.yĴxHxUbΛOA7$ ˿fU]A $®2Ap :SC1ԃA:rOTE|U _D.MlXyQogae#[5f6GxQk *.xF)p1`7lu+¥&LIX8FҎYRʃp-4kYh1r<ɀ~ݱE7Qslŕ #>. 
!?GVby- au>Xi1 :T0o Di%qk@E :dԦI`^2WV_A33;"E RrȢ`TFS: K`@uoCgd %R(^+]^=GU!)m hj]0*Uyy8|}0è47Ҝ֝yr1¯哟o'7ocbRψ3_z]ߔkK|mDpHo0oH`2׎# C!Zd > ~Lv0]|>f7Y79*jG%hI֍Z7W'4#Le$ ́}~]>a$3,;Qt*'(; ; ~~?՛0QWoo`$v&߻w ?m9kjho:4UlU|qmr5>ٳq cf[n!@~v.y/N~ҁ|,B: =_Ai. ~\J˅pwP!|L1L*Eu>$n`_,73E{F0)Zp"0&qT@Wău4"xlSI6Lu^VS8L5$r}NP88LX"X'uԔGaN1Y93jXj5Vr4rhG \GCcyl'jOT6k;5'",Do4Iyfabi Kn"#Jьd3k,Pp2ڥ:T_u:`u~1 VH"Mr%>, T(0"騘 R*j֖{ɶW)݋yaDN<ˁ3G[q[|,s>o،hWtzӛ`kRcvNV\Xb%j:uK-Qx^(U"QW܃:vRWZEO]]%*h+TW`>gH]\|,*vMHr2}au/b0it?:la5>wpIK.) ۻ̛*rFjfN`WiHM'j>u5T~jZS%~kf i/.`JO@AG=ӀTvw21M&{z0'=wg- k̾xMnQǟLc̱l23~u^ 4\|0E&X|vŵ?:mo4ߪ`TΥJQYoPRWx zf1crH8+-/boo9{KVE>Ӓ)E 0wz 痼,IQF)3Zd>^oVn D5RͳD٢yǎhzHQkO¢ϰhdkѼէ*U"U"WsQW@*Q[uz`3JDgJ*$JTbܪרt*p5{aqם1ȌĖ }V*fn?j^E^6+ 30pΘW,)[5Ȼͳt r>RȐ,sp~}$E/OxS9wDs6-3M3OR6a Ljg*޵q$/{{~l8FK"e%[zOCDcؖiVWTWWUwJdјP,5 D$'FYbOB%rVg['mgS|46q^xQJG )amwuszACLRH]a2 _Ч><%A=WzlQ(qIjP 5wIḎjTD:ERʇ r@C ,Akԁ;S$vTǢLO<ږ{J9*EnN_(_Hh@4L9E g&ZbJSQQ> (@ebd.r6Ee1 h[7܃?+eڲb9mմ/I?0@gpcnAIeo˗NѻUt;P΂J"JˈCT}سU1H_P*F=gTȠ-0,WZrzԚDNu5Ŗ~#lH+>U:?ZotQkG{/kJIWc|<Ҷ }1 .#)(YڟSZo_h q'|Y4_;~lZ)>_"_3~}"IɣK0B cDImJ: YYb|L{Q s@)$(^BQX\GFc)%P{x#p $w5n 6+.ђg-"L!1#lq<\t<9˯Vx5\[uHM\ evcݪ_E߰XM5p}L=(ڳ\^idGRJzwqqMM7H>$k )z f1cǜnQEDn?FT^1,bp:]?X+mMpӡ 5 5]G8+[}M2_niZ<|lN8n0Tu0vk{i4:{@A)&^g}&y31%Mgfamj{.o8W#] ltXjEfY`4LP9T+b&ϠYe-&L쪎e^͒ăe۰ņ㜙"RGB Ǯ7u2Q?9юEHc Z#prc$|<$ dt.#oNGP𹁣h\ZzŅM4>b^HoiRN[Pq"77 I1{׉iQ_#6CCǝV蘴o&x4DvWk|Q9w3Q<>8XIdlxB'|U􅔧.RHc !,\4P8%02jNY||ٚZ8D  bXsc GlUA.d4-kۻV:@PpXb#@wQ ڠ /|EM'hB TB)% ͭG?Rz t-CNA y|->E W yȫbN4}3J-^  iVDۗ^Tmf]\|i BˋzѪKB1 !-6&rfp&{'ФCݖVHp |h03Y[ nmI<e L_L :&":04:t[(4LjM!F \h8^80hr. I5)XDvg<_^+ ad Zq7,=ՊI/ ^EwE͉!:((:\,(C'4H3ڦ^; IƢ/M|CvՑ*xUdZKІ*Pt*9WD\QqW`lI} K+{ R p_:p\i!%T$I5<d//"\[S0`-"9'Fc (wTb]HAQQpJHq J=j@h 1Y Vq())w0)I!*3wJm3[1WYv.ns7co<;Ĝ(jp/:pWq@O[`;'Ԡ#SHPN%99Պs ep㛓r $mxi P%AD]FXnpF"Os8W0T}Zb*J7<7KHբzm?,:2Fglz[. 
1z5YrnqymFUBlizP7!Fo~KlM.3Bnb&i<:=n׫yo58oCO27$.aݰm G\!N~!!L'D>Gq+#{]צJ)7$̧qFrgWeL !=i}wjCTLQM?~|>=8GsX,4?>ھk)[v͍hӵrԋ oo Ր|Pgّ_m׫o'Q3L: v , ܓaA _c Qqmޘ *bا|SH}P@%4CE#z/L#Z1!w$rf'U1Ԗy/=1d`X@@X+)h9M$=wajuLF8rq8Tt^dL\F%FX*\2uY?PZ;GuEQ@=9KƫK.K/sIPG|ׯ,`\'ٻ6$WHo+{o;X`φOg䒔egHzPȑDFc4{#KКW:$ uMz ~*SԬXO)N*PVR4[31&5ZDC&ispO^4M4kLm 뙱1$l&q*O SG'pN_aiNĹ8BKtgǧ%>#ߓ{&y&GMFl㞗ѰAkzJH(Z5D %oWT/t&/t [oǸK,I<& hTiL{xz^mH[[˓r;bη,s4 1s H\zmrXKcm 2q^Sň;XoRLiTgC-]00V) X(^Z&8e.x뜡G `T,jY%Ȇcm;!{Jl-O2 Re{@Ut3=uJ4XsGbQ[9όk͖DT$HςRSC%Q?4҉"7] )qWx2hQ]eŶd46O-.dtQ9KZ8''޵s<*H;M%oߕ $/̑y,iӞIzn ?ieK(2JR9E@( D)rFI~nU Jf)&1qGlư.ꆱPXX8(mr-%Uwy㋱R>zݧ&{bFbrkTAT9z"* 1DEY(3I7Y8 ٳ!c6T^r'yP`P>6 c"^<~v~Xp1뢶iMڭ{sYTGAJ#uL4"(Ȣ72\x$R-;izt)G9Lq5!/! ǘBxtM iLrQ7Eۂ͏5H8}^-".#NbJQ;QV&"Jhh1 fYiM og$ !eq!Ң`IW,qh4?1qGo_C\SڦXgcd]\ "mq;@8KqR'Z$QP&;4mfǺx !a>K\'Z#W.\sJKݟf?>S#,+_r: D,y,RPFsOLWm'(qcW6J 4xǒ2<ɤY~͠Hbt4Zz7xEof3>8XN'ޠ{ia8n}u4q8]1mfsSJU4bK9 'B]A;8wČ D*J)_G)R8tR¢їb!64 7MVSGS9p1\넇P:U6&Έ8Rʵj[9_GR{)q0-E @/`bZro-J8P4VǶHFm<+bmng  Ne)|K/N`1T?_tF!5 ;]AS{ةc{{<Ύ?'.) F ́?o/?MG磸<Ûͤql$a E%$ %˭.zJϪ8ῪhY砎$v^Mkxgy~]64tz`ktxA˫ !e†)]x_>ig~٢,LGq @qsq5s:ieh!VWf/ xADUEOǣWh0/ 8=|9В|1L˻TƐi#5dg`:+w t[ (%]L;mazaZQ8? ~Pp0qA=/\=LZΟ&ز~\}4ܳ,;W4UUC W/@e+v \eiURpѼf{ڡr:^ h%䝡0^|헗}9_D*!p$%cHZG+xɘ1%.cN#p.} tlYctlY EzO"",?J,(M?9҃N~APXJxs^pp$ahd 4PmC6oPd ($@r/MK[ι Aw٦z滞:h6n:kW}űyc'o<2R48y(Uzu6R_B;!ir.r8(6pUpRA< q7hR!5_0pgÔ¿^ Qxͫb;o7v\ݴ>Թ 5xw4ӬR:XR*(} Y7[-L|^ eO=΅dC`guHIERJPF`oHNd2gxz{ ^rzn$Z*y((\T(O n.nqiNqA?|CaqU)_E1TWTpy OΆJU`ٟ{~ot=Δj޵Q9LҌ? ŕ>M.3AZ~ϼN>N-f7J闕0gn?'2s9 A?tAro>?D]K[Ζ5˛Z,PiC*|̣WAw1eݳ1U w:VU0ƙL:t$7,pu0X RacbPMCekjC%Ή_uw8ptͻwǯ~8rtk\q#0e"Yf G?޴M5͚fҴn_;ES]^MA̞;&=_r+bsoMw\jH+ޞLtjGW~(loOQIR0,Ts3rIhbՉ^?jM#]VW$Gtp w\2Izh6': njHfsIzh J}~ߦ =BGЁSyea"1Lhc(w'3\:%dQ:89nKYZey攳 YvO; >Ni(U-_kUE.&L~Sfa]4>rسX*n{쫑y ,dO:Vec_X[{XjO#?~\5N'ɌeCjPG\Y(𼗑~{ZE Ί?\ڍ~k %|:)O*M9'> ʰߠ*s]67CNr-qi{(/"ii~A s#hJn+uP:fy=6 qk߃VȌ3MlŜnOd\TB2ihL,%e%EN8ѱ L!PG4$n*:4%N$I:IB8tAIbZ)",ACc=\Gl{/ҝ/83kd|x\]ߑ=Jj. 
/_p"M/r66؜ݝYZ1(Q76Auuۺm]ݶn[WV2luu[Mi5?ۺm]ݶn[Wma^pJ5{čе'}ώ)S[$r2RAAΕ "2JqM$c ΞUF$D 57`SzvÂMD!b.֔z]ІH(ڝgoQq >ipTi{+ҔےAoi\^"oyNcuy}bGWWApセ/{?_em{F9lwrl\֙#<1rf:f'ԂqV{?ʐZi)mMg2/ggښ6_a) Ĺ_\\g]'uɾl\ʊ(&HZoAPdı-fcML_r0^͠pË>.)l(*z+/[(α'R& uXD1ast|/# >!ں{9.jSꬱjJ{Mz$C0!T*)[X1c2b=6M X~BZ"2=ecg}BÍbZaߣ9% F#FȨGr8(1(JA1ͪ!I*h@.FO)a%~(Jˬ٭|RuY&w\ߜJp699(Ix=$˓OTrrS9-{^i~ wlF\7pZ7()&=)QN t3#XP1g&HFljVɆ$P,TPuXxT,\pm:Kq3KtHw&7ɯf0}\'6%c TTE^%R%{+QMOU42JlR!`V0/CJGt`ZD"rRNOnĶ_q1OIǡ3P`2G0P{B%˄JChpZhG9q`))u^<\HA mNi΁'MFbxɝ݈]WmˢQS&%"ΌVphD Αd<C#1&% bT,xZvx \<<&!Ɍ0<<-nlUޔpU{u&Tǫz1x? >ՊIN;k&"\L _XhA(J 'sŮov.MXɴB0披k-D˟K|{"g"qѫe߫]&Eʌ{h {wÿvdWp@kw_Ż^!v(GRpVPub̸(^sɭJ)GJ|vt-"=DhߺGHMQ8#p$1U> C ֖zܹ!K=!TˌG"*ujIH0<)c"ҙWZNi^s\X%x:}[k|yџoF^7},kSGcp MpQ‰FύbFc Y)Rp˜pq(dS J#ƃ Aw# J21J  ,9Cm鷟uWz]%qT::OZRv 'KjERБFB)3P9.0Ѫ<<3LzZC<"NH$F9 H TXd(8R%R&Ox8P|@(tz-ʺnZG!C,(gwaw38ݟLRO K.4OƙXX/4`xĎ3BBh0N#xҼ+KGR01 eڀ1q 4 29ro-SlsVV~P >ê߮ټhpnQ&jtm@Z ? T|<ً ϴu%fԜ}xZ>oLӿ?Rw#SW/?HXT?!9I!oMC~WAʙ}bpx^~ B?oVIǾqW(fmg3+h$ @}{vv16ri*[}{+C*5 j=l’DW ,Vte&!7#~3|TpOEnכ1UZ3k +kjͧJd6_[~^-{?%t5s 6 _ z!߲/Ni~4#\) bz[[&ݑYh6BWm3"8`<^պ;#KgL^%(ѱ c{hh૬Ť)D6zfϲ s-;y7W9ٻsCVvW³UslEM HW\)#a Jq0V(YcW zz񫙂82}s3URbcʕ 14sE@8*8CU4(e$.eV_MlT^M-{5:|pvc#.euQJ%q [=Pa+_tztXN9zE- 1HS0ea$fBp0At<]Q-Jmq*/[$L?cTF qD "N.yO05)|fy+Ӻi:_]S `MWcyMLj!%r/FFs`%FND$ш8tJd(D5e`px-Xhm 4wFm6*GT KIwPɬ٭'ǩ.ul9[^{ՕDV%Wty)DFM61{) F*$m+1H7<VSE}Ox瓛/b~_GR`l]q~_˸΋}r9eȭ*Q ~PYpT9KϞ.۳ >iG\T6ih3tݗk[o'y1-Àڲ]6,6CıWw_majU_ja LgaYsOP=+RaYw3M]][s7+<%[;heksS9RD815$E"eOP3C;߾KO }I;ܼN'Szl7ڠӓߦInq{CZڼ\wDJtgr$xU]e)~''oczْVBuhMl3iWRiЁÃZ;:E/{ZǾ}cՅz_j9R+.K\,=,#XYӗijt2yK.Kpb$ټǙs:Is>գIbU4 G9 RYB\I1IvR,DKbK?hP[ɢ*8.L >8'[I㪞۔|JFg pG{PP\>ǕJ\dl2#9x'+A'27b-dSˑnprH!GI Nzc3(X7 ,[g If jR: ܐd A<&HH  beAFfAMmfV+EK*5>5VN߱P2|ONyX-30YvGј?nUQysrX`B+sQ$+TQy5Zs .0ᜯۦ؀- C`=mYrAГb 8ɤO9+on\+ j#c5r6#c=]V]*c!XrBӷE33:)yÓ?{fpO6y!* $ "@:L,Y\Ԑ<t'm(AfUܝdy,X6Ѩ`Z>,KʙSfBeĮFfk\11qǮ*Q`׶TQ&kP$8E3^iҘ$3)ŢZDZW.EP@,d "+HXE#hHq$ >`" [~qHXX숈ʨqJ="~.IؔƊe:gMf:L`@ZEBJY[R= H66NZI) U" X\i$Ë3-F lٌ7-u \,슋2.{\\؋ġ̂Uat-VuuGpy*F߬~΂#k XVQhMYcD<3Ƹ Bu4-0l&k5]KA 
jM7U D={/p}^'f'}>^i<.ZD$ˍZ59tpN|# -85RبMC .jNz/[V>P!'ZpE8syيq:2=#h#*I;MϭHpc>jh\rZI^\ \` H`#WhC+ؾZi++\̅\ EZm.Whe?Eɕ!` Ub(r:w½8eWQ4lkm6]k<OJ9%l >O?8ΏãYo-)wz1m&_Å8CFgbiD ) }oCSoMPǩ})1`oex W('EJjk}b,2fB}{r4.)ĵUj ߟNj0KUilv~KY]ѪaZy"y'ۥ@p~ǰݓa{ddwܶ|:TVw[ 춒Vo<:ۆz=xv5ӴR$s{lgkR9yОeYb7%cqBFyȝlRj_+^_ˤW4mo=˭zn=d%߿-7pi` bw|zd}dڱŮ]:O=컽^}/imθv呟k؃{K6{>nfGG(e7'AJ僐LL)i6K0a|/(Ƙ}as/]K1~j}C?_moؾl82r!'nk͏_N!7|.brfwm;@N#}-&ogocu;81|D$o@+nK-bf h^A R~;/8 Zucw1t츁^ ˬq,Y[.%E)Uc8uVw]awfAκV;-o 1:rJvIw]~"pZMn:t}l!7 s7ݷu{ٲ{%l^ sZP0r%XX)Pyn OSP6HVB(FiNfsz&t%S]E0(Z2z,pd;;CK? R.fʥ*f *2)&EQUW-Tsv+$RP0TVbbp-иz^gYu+aSDJeD28-ΈlT- ;=՝|طݦfhg1·cyyxS쇆uD|+g0B pφ,sR * #„$jW:+sioæ_ۓŕ$h2Cϣ˓2-</Asi!:˶y78]Nwhg$rLN><ÜFf~g>阋|9ؤikJʖ #m3uRPJe93ݧT7WiyهIB;ȡMeex9Yd u&ra9/2W˖g`5NC#|L[ɒ*z}H8)\تp<@҅`^Й-%bt>q_J˘Ss mfj ^yX'Wғ,2?>Ic1,B1 YJ<`luJ3utXEV,.gw4P *7SIo@RD⩨AbpsqvfZ>9弉P.oήz3ݕ5}U|Pz6]_\ _4WhvY>Ks%|#7_U>>J:U_G&KB tE&Qdu ^ _>LH*Ԛ;2 އn'gk5F;r\6W1Jv`q]8#Z "յ2vfrδ3W ]ZH U -.03ޱ(`&30Ww? MLf|+6`٘K5CEbyˢOڕxPq8R]- {!VD!ʦ46Vxj;|MMd]NJݙ9+vMV;Um߱j{Rm`U%}x034Ȓ \`E˵H9k}zX!ATȐrU4Z KFᒁB>1WY:̹_ۢz8ƾ(bg" ֭"GHsTej溸 Q^E0iRA`E+@tqƌUP$`b31PZ1pLK *i w':.ΛUpi3/WyǺIIBʢ#x eHBfgpN0B^"Tޒ.>.E;}Ptb?=| [tWE9m{Gk@ss7QF|$8RݯACj>TGnJ%gj{ɑ_S.I_ .[f^`W[FJgղdeYlϛfW**bU QFD:(#GWZQjyG)RD<0EP2 H%RAPCМhJew @X@N%eLI rC@ks 3%TI1vh5 }D'??CŎ-R;g420zJPl6φ;N:rx) ^ZJag×p zZZ#Dc=~v`3B?7!'iu ZsM;F}#*bst6A l2D/fۄ25!BP"O3S7Qh 5&n' .<0,5X2k578)=Piưf_ӑ[nRrI> |p/1 3t+ꣁ dTe4pae?x9$4O!x]k_PC/$vu©G6mz,ȃ3ǜ~=cWqܶqiD`5gr2pѣoYz}8kb~%u ?ȡ?G{.M.x2x/$/ rWۆm[>Š!~c}Aګ~g~=Zl< D{ճ4[VÇ%e)ͮDaUV +獬S%K}aou+CI'G[Iw4 6wE'IEcJNDu$F? VEA$rDR8Kk˞ b9RrйT:m 2ʙLܜE!RD[)Q76 _oEYՏu?f(Wv S\C&:7ơo%F])6x<=0b@K0P[puN#ʂܜ87,[X, 1>&à{ 1ǭD+jmQt(h &ZO Z5 T2[ 42\3CMP *'0),CsT9*x hْ҇az wNw+!;l\M3jہ#(,n73zwnx<$g4( Xi$cTk1T9.BHULr:L7=<Rc.y! 
j$t-}Z.)!ڟ<[?r~Oy`8N_r}GG Z2tb1C|$ι!A-("A7G;mzGa#RZYסE?J cIg7t3؞HܝBx ^X}ncl(9"qiNAX"D& m8nrݴ^Jh /;NPB{IfpaaO|'?V̮STHvCYvǏp܉ٓ:B]^ݏnw|$m<ۉٗڅHS$.Ifߍhd4h.TFK2,.НnНNjOENtޒ66I2Y?Lo#B.'WWY%bVdK){F4`$D4ʼnvm ÓNE.yijgO&WWJG) e4IzaDP]P˖!̻80 s#I I[o`x`ɂm[RC '(.Κ1Y$bfzdowL[lI"Iq6e)qǝFW?A$}H }v\'mr' m qw6JbSGTV&%i 29,2 ނNIq^%ЧjWҥlO'/[zå9JxR_u ;MPZkn$EhU\it2*hE+PѨέyӘϕ.+H?}d- R*GONެJ{I^o2~IGݸʪ>z Ƚ,k+ 77M] 9F;t[*E 0)!$`up 8xVrْqq>r :| fkm J. Uq})44>A%>7՟}yպ✈Em+'ZD^Mlh)ֳg=ĞdK("s9'{K'(X(`D1ZB.ro4ʦD) 1PK8έA xGU: ;`i ;X HF[x [ ٗK*:[Fc^  %ОqQ&Q8Wt| dHD*pF"F 63E9Ȉ*wLoM'WerbhOM![i{rtYYUyj`4k@|+M^1b~2';]y73O;63؂7\\hZx~?=V5p+!- CZk/t_\vDG f+I ShDVz0T8JA|a$I$ 88 @Ei3!yIeIMNI+G3nK|Aƃwth./١u4Ask&$4NR L@D%uV(([KP:-[Dy흄OSg;yTz~h"KA\ܛ$N"+xe5\)כ5yMIL:DٺPvh(Xj#pjk%NR^EJ\5x#zzEW Y>{6hnχ~7V]23\HIT[TT兪@x!!" y  ΅4GBp!E߈@X@BVI ʔOqVDTTpRʞl52"x5@k\yDWK˭4$;+;hX~f-{!ϗ'ݶ.x-FqY)N =]~kvJ@m'78Iz6g~Ž8Xq]Έ>T$ _tஇD?OLi Fr ?Aza'W\I fݞhđڿ$r䍣$:N(OeD;v-^%(}J?!ZGN/FQVY|]mo9+|;v"/`>e; ^5E<=Np-Y_݊;Hj٬OUr';^[M=@eNx8#?QoɅ>n5uQcu'_&f_ V[l`5v92/?^ ,ϿghMΖP_ҧfdw3Cۓ-+mEO0O?NfrxԍUspɓt8t9GRTdF#~J-8n@,78ڍ$<|Sw7x!Z"{PhmP Y)c1ȕRQGᲨÒ$ %OLقVnpyM= SR 䒒$dr )]fZiK5r0AB6N/{: mN;;{sӛj yF!n+Z6~gs}CNc$HT^& Qk0Q}ǪJ{*-r0VZoi2˫WJ V8YCCP#%8(FK/d"(ZZ )X 2ɖ"Br"Cy]7qn&GQhHZwWҵo<3o8zvzKP8'|"8R'>{ג&oW}[z}$0/ R\F+׺+9:\W(wIinrԯrޚ~5y&[&={a'B1MjG&Oģ9?;JW/d $w7)gϫFOgvO~f~^ִ/xu;`<dSSS>>YlJrP9nsk+k%kOjtnO!vM5_7'_SEԌ Z2Q582b˯o9!>9o(̹JI;I82&xހ#oՃ'X`mkbi #\=C =+ X\f_UW,%Ȯ#\)@bJvoK:tR"p OpUî@{+ւ:\U)юp J [٢=X)iɡ?Zi[1ߟh\IxI{9sqYFq&FFoQ`u`!^𓿻O:~ǿ-?_3J>)~/+>T!_{"|*Mu*U†&YbАt }rdVޛܾ]<ծRJ׮v)"ާ;֬g|8և>JAJnT +I4LJ!gv?X(UA2`)X%HJ-{P:@HIDPǤl>gS P) 1,"%TIET iz>&՛8̍p-dZw:ѢӴT޻O$!fU 2՚"]š"4+1Q΋)^5$k;0b)8!EF+!SVس&͊?g'4Yї&. 
%Dod ?4S}i4udbh?s5S\_IX,ϦoIH)b.ŔUq=J* ^z Y a˛|%p"Rsm*:sd(7qnFJoX ]XF,|P,HTv|R˫EZ]ldݜoeg_lq:>~GFl)!cd%dfSRvdEkH1+é< bEJVTƺJbضDH\D}s3blTg}x؛87akGӔ+0{;""y#"U&K o.@gXˋ]la~ kJk !-$<+HH06P1XLHՕM6Pi1%-*o#S&͈xzUeV!:{%"0∋זqMh8уKL|:=D[V{ %Ɏ80C dB$8]dRF* Έ&|!&RBB4b,h JYNkQ"p"mFR  Xc׊y)߯A *W/QE)-)leNefyD/zfֽs@ȣcsxeHQ{ÏZBYVDw¿/-VW~f_i|<vl] CIzf6{!ɜ D!Ȑj:a^je%SBn[$nX>IIn\4,Ǿ>&Ja]tkV (sՎ-NY]!oB^V`z[FlWv~l7>;͗oVQkfi+unxaTS)46jW[gmpԮ۔]|vNt;u4v25HH1b )l0=ɠsL!"0Ӑh,2]R#?81QXek>x,$ DI9;'4D ->' =ĹE.fe `T<ެ+LD5)],NɖISkaF+$ERw8 GSFLe`,w:X6`ɢe&Ǿ&N6Nm|#sػ=3:1g@?N쥤0!fim bpVzpQbMWDjj\P*:DsfW8"o֝q]77MNyଧëɿVs)SatV]):rWѰ^bm_ŦD!9x6)Blsy.1ez:ITb|P%669ȓZ-qq2K-<-*AO.2ܲO 8ǟwջuq8wЋ*]wq]AWLNݞUxkpv/e:'`C6j^TbmԭґR=z0Jб\ZحZgmh%Ch>tXc745Abk^lv#68U_vkgljr9*' s*DOUuՕJIъYUȳɳ:g7wIp4\qzsPzضY-V=B=*[&ܛr,.F,b65N1WY_USczɶυhPx8,(0v ^Z;z,KREIcV~`ʤE-fQP7$kTL̹А+e6HWB-C @ HlUx' ĹNϵ'/zJVzBBjELJ6d-*⯎Ӆ[tUݛW x%_лۓ^gY{9+ݍ^9M 6$^xKZZ{  _P5H4{끇N.'G~2}}@e&Lnq 7.6+4BX \.QR>y+x> h *3n4zmh[X~1 uJ7Z B{*U(}ٟ6;9BLњ;P!*? ~Jd\ѯ(j;Pi( _%kUK lP뼽_4T+Ng73\lӝ IWWcz'?w޶iVa:O yڴlbU@RSpިB:ҹeh}~7}׺O<^R1Æ:~\^$i)9Hh!$$6ƉB޷{u>:)dY+4wJ =Mq۵S+n*s6j)zΈ|"I۲6.YGX1--r}Hh$A$B劌OI0@'P <1'U46(ׁP$V5בLJI;h%]v+]p,;ͮJ?OP7;xG\CU8qz3F]YMpn!,xi9&$h}pZH3t|BL_2jm(Yz ')Em[v}8uH/!3WUk74`J(:ʥu!0<ch;i%wOз 9VxoGP^D\PB51ӻ0PC>aY?ӋFYBaf0{!w6Fa :;AwkokS.S(+EV}]0Khd@VuFkEl*wSMp{\ӝnFzO ;AlF{Srsh#ttwTASEF<{FyCYc$g@D*>[yt0)*v `}cNnOJ')}gY/{[fL(r㔗ec&\}$ut˹-^iMwJlFʾSF6hrpL=㧊Ȥlʦ~1diͲdg$lt}=7'g__n?xO&}E$Fiw$pV9f(xI:L+iE 1"Z6k+ F:CdցkA:+,@z%kYdԅ5 S9X =n?ܶ׵vs[VN;z!it۰f:6&\ڰfl<6K]֯ /sox" =~ů0߆!6sWkt!9\ΐ{!'u !ͿC{FhjuÕɧf&r)Fg~߶rǩ4"( -/gP0FN/5/I DRsg3)Q&BHRNX7蕴yo1Sɝ ]do잰%D@X@@B%=> uZ*똷歴WrUMjdXrZ<~Ů7ξy!F+*` G,>tz݂lr Zm!Pe,J) $(Ug|]u-Yq/KƟӀ DˀlLQ6y|ʍ%.yS$DD:DOS#[m-4khHR}iV|(,K9-@")4!^h6i ī#c aހC Z[ 4*8kT4*c, fI.R3HRhZdm[Cqިg>-SCW=|[fbNٮ,, RO/㧉iȌeĦ"TD -ҙ2!MX;v(Ҝ w6Km#AaZWVA{+#c:jtJڽ?42(Qx*kqqAX]PVXe$#1(ZV@[Cg&v\~]즷wbˏ!ws!o+zl5vp->NmCUfv/t諀+q4ʤܗ8.`(oZP]V UWK\cIS8a:9]p.8v|-V3Cnmå+2ބI tt.Oq@"UމS{nl $Ir6I$BzF/0g핎vyV Y~su "KěB h(,0hZdvt8{#O:>bOBT[{Ҭ܏9Qm#>8`drP"='*@'͙Ȋ41\w!(etsrJ7Enٝi4 
Hә"W;X@Kbd͝E k(2E1`+ xRZ `o<3^!MZwgt)JNe zɩRr*ckĩ"JN}%@(X*8זvJff\CR#)"edL :98.N'i/\ s,{Y¥0cu@7iE:+=a4Z%tڤD9%051RHB x,Xp˨BdhD͡qHS0$m)4t1=3 <|~\7~<LVG;6xAyctN;#W"8 K]4Ͷ!kgIR('뇉G1NDA|{vlv"V1V L +ܝ9"Zdz${*?0δ~x4t̺¬4rϣpA*+bFkBJIP#x \YAɎ_'_cM0RȠ!GOTca 9i A 켣F;Fj+I8%$xu8`|ڢ\Dq5<T+Q[ JyʭL; o)=Wٚϴ:>M/qy3w 2|F /q/?&B=%Ӽލ|cr;QÅ!p(?0~p">fe;0܍(0z7]ܑk8_$޺EzV D+7"l$F9=7 [_U924O0G6гCˏKΖBQo-xf0Q}TpЂc9~Q8e Ҩ\=7x|BkcB譬ݛ 7EٓOo.+fZj3GAf,v9?O*o;oP6̄&sLBni< k^2G q8*`|/FOsomqVFZ/i֦J)(i#c#ybg^FȟxT.w*f 9ջ]W]}epo-8FkQ7 ijQ0@ySK8_=5檫 aYu[Kn!`Ҙ[2z?0?ޗ: ‹Ss-ɽPF6݇'=_(:VF/톿 !_@LrY?(y*ᡄ.<)a% q}M';n˩Jm}K^6hHZӤL4( F<Ȗ81u=\54y==I~aMOު,=}1jWeeq8uz' k|F/-xY c.qxqb.Sڌ&Y}ԭư ;_3&%{5J-$tM~}]͟&/:Fwkq/˷*7l7{F^e(T%dGҔ$: ]Q0޲wzrCv "UR"w*e"ɰLBsc^l(FԨjIg,?ct,ٖǣ(8H_(A:ATϙ@!3Rb)M1 ,owMx*kg5FGkFd4AF (* £gsΧ 6U@e .FOƈdQ(Q·t)z Xm yTdT`K&X,XTu 2q2aMT$D.K?ʲUx~ fQ0.Xʁɟ#iyC8 J3H!XPHпc^C h6AA+nlX=3X gX<1p8$_Ǔ9xI)Cݍ]crl}/8{a柷񦣍uv|*+ЂD52 H&c+←Yd[tLa"zJ{΂/{lɿvsFv44G>&?h6vu]g&wCqソo{wٗl ?vLs73Z|aնu*-%uCӮ'-RzQOj.(Ҿyv⍣V'oɠ/錶^Sf6N &ߑ a>Vzh'j@i@Dc- tZ$pS(BYl#UAچ~Glͬ[7e hƜ"bƐTE\d ƠceQޚk&Ξ[s;!{@֊=G^~{?_%fe *"%9ZW!+AQ&E2F Yg**Ⴊye (!t1$d%q??fE𩱢6iEE_hZtx>̵']l3wx]O攋m~p7iw[Q{om4mj~5ns򟽲3 YkL^1 6L!!:w&ܡSZIoEb> b~pKaBC0@ BELJ N)Z1ob6zQ@5KPX”6hl9(zMMXDh̬g c.?~)1y-Oxk<-E%щt}M~\cv1ٝwO1m:]dϷۑiӒ^kq4u祜Ԏlp֪d*䱔<5 cL\T_Հ"$zV]&JoG4!uKiy@cJ2Ş60t0(;t&!`节z^#?:ʫpb{r!_r$kOICجfAƐM&R c0t&K>ԗp}7פW8X팿տ*u"fݹ>v{#{#gX! l_bbU\eK*QC/VX7+J^0lry<׏}8eLo߾40;U&QŽ*.: C2vBiwpDB 1 .iH]ʕBZʔin[$@*!&"eQB1Xc xBtuBֻyzdo U/uu}=`/97Z$4CyZ c"9r=úЂ^-؁DgD*EMV9#K7pziE/YS@YKR'2rd ĨS FlvV̜K -(|}HNs-Z`48[O!!P֫hI~WPd lH$:J4{4骷o>L*oBL~_%>Ogw9/:].{dHztg4 5qI]3?~q3ѥe"SMt"9_$"Lfpϓͺ|6$ϩ\܇AFٓ{Uڪ} gr`T"uhcBCf+[d|#E/ĿG Ɨ'Žp'f6:~|^y_XySwrZn۳KYdVYfV핕ZUT"N:0bܐuy|/+^a;y~ƚVGRɓ5pR\,9L#B@Sr.Rm챮̺L( i'lo`ciA%NjMݬyj]6ޱBI+@%pZ&zX" @%\jlPH(Pz  0X&Z&F+1JDO. ɓU')I9R2z$)(Um0#sݕW_Ւ|:-&g! 
^xa|awFT 35YPժH s2y0pMێlnF(F>B=Bk}_i|ReU|MX,t"<+$r%?7k"&ZXuB;+m#IvIyD^G=hwc!)IHz}#,:JQ2|U|2"2N|>~˵Z7[i]Z9[ NR r~ͳ76i<<#:!(m#ߞPCf ?q8ηM $W\ؔ #.VAxC@\GoN $ޓ^#gO4nvdZ}SSwcs㞩m^ vz媬/}26(:@Q1ҏI}"`+'an EhXit2"hE PVFz^ШyJ'V}e?K"ڶLӃ4xΏjY-q8NHۊ8ea4F+Eg3ӰyvH kHä!'FXp\ ouhb?$-A7Ĭ,u;='*aTF쥦,~r>ځU)|!嬉Ou c.(\Զphj5FK-VɋɚOY֞(: (Xq8# woph- 9͵=o6%Rg-ךROmZ ;c$:'-u5ln KEHHᚯ,]2Ě/,S9Z p}i:KsU[Zu9Vͱ$1k\:* r8]Kz[_& VrϠDD/('tHTϣ6z9hY0MFl_(M+x %ʩ DAX%5V|ZQaLrh.Xhq$L(ja(*TG#u` `DK VH Eଓqo U%<}/;mz=OKpKMx%zR˫W6F5`'ܪwF/Wj6l@M69)*[@BRJI6^(DU-cU+X4ߞ(pSMrcQ>sʺ)cI u>wS#qWsgyjIO}ۤ}Z$^' y7߮Ei"(!^ ʽ2a ˔}#chs~ICh67T$52J1'$`Ҟ-U1%G-1$Ol+18__OY:ΕRAݵήs~!O*`zEYHaiLJr"\bO˼9# r/*~5䝝~I8CG|׷}O9qu O E[Rrg 3\%2I8"+qC(YރT_o,:l[UAڌm]D^tі܈מZ[ry9,Uxm) p;,*>|"Tץg`Y(jts9WT/:*/wf7iU`z<`N7DJڹ,}j'zTC^x׫ iV Yק,3xWd,%I!M@4YI"1]oڟQVķWRolꝉTAtVA<.2.$ :/TbД"pRysVx´DuR+AۨK\h&Bu2RHp;m<8T-\D"Bh.(CS0$@ZncpִS-ٴ5Wv;H-G~;sҫ.WB.)ם/}*!FkעMg:%BtQqp gX@,Xhtqy,I20{b4Vu)JvyaeLȔT Gan>P8W!F6QONFE ۏ=B9a'WRI)'v:_d:CrGX\ՒD 87%\ Ihnq܆b5rgPT~\bXvpeK)(_(;G rvi_.jѣQ8VG}nN*w8LIX~_yW^xs3|U0{ Pf }8Nrs7+Ƒx2fVS[(_#$ =}whu|ExR< ]/sem=y l\_!/(Q!*VF %h1@L~%~sb5; sJTJKj ,&A1@N }Q*=k2"{9 *kscsz IݽuKQ6jG@TW3Z՞Qڏ0 SCt+p ]!ZAtQRҕ@eGt0c3tt2Zz(MOWHWRELBG*63tp ]eܴ2Jѧ8;FR0:DWNF Wv&#cFoM{w(JK&MA;U+:CWl=]eJtute .e8s' qp x1E,|$<]W/2 '__9np=o 8#|9҂ 0ډКkC9.ϭUh6RBӈrv(M"z4mZ}3>qpߗ~:ގq"e@ͽXJdKq|C3)pZTEe)F &ۇOt,]_^rBQʼ}0/ Z8WpYLotlw~}WDdE]T0FM5̃S9! ZHss#?7C+QJI۵ O۾z!CtEˀUF*tQJS !6Y;CW.7]VUFixOWGHW\iM!*Õ+tњ+RhΠK t2\NBW7*4#+]+IwL.2ۮ2J襫c+!ʀUw 5t(5J2#` "\I:s2ѲUF 1ҕ6R.e0펩=U+Di(H+n- /  8aPQ]xm׮^'b8F = Nd) ?*$᳼>|0 {]|7Kt$(H9wX# cC*1Ʀ<6dY;[%w%x(fWӌ-f9{EE N8S$&zV-%[NBakͬPnCF 32Cg lgFkr]Ԛ^P=|f5;0]mWhtZ8f(eJm@Wm_=#RR!ʀ5n3Rv2Z mm=zzĘxn"yg*ժ+th)mm^=]]nCtAt2\e05W#+ACt8dla fS 11ИU6FQӐ[z;(+$<$~&?ε&J̕l-5֩ (J!©YOD*K!ssQ鬪$\s=s#L mKьdkF_!^Rܒ5ݜBODL;α'_s̘HHI?w2KjJKo%-ŚP*" P[ %#BR8{h$dCiВi(MbuȦITчl0ɺhᤥJ37gb)E&(6Y24,REv4 %֑]Z@dM˂HJnIl[e)-x}fh.*u8:H0ZÅ]p}qDo9'Ḁ7X]7 Q 2BskB-Kb1OuSB2fC(nv4giFˮ ! 
/HJl@6lmbiCm%ukw˽Aq*!ȶ%b؁~5:|suEX{bئ!CEБ" B8)m)b$TP\ZD2 W4-.3\5Wf$E" dt:zSJ@(V uW4πq3F-[@X#a,JLh_ M) Wk*uk)V6q%P4yȠΌA|z$ ]3U9u1U !0!a%A߇y;Ovיx^Sp˚Wk1Z ޹B]wr.`H$xG 3P.-sm:@G6]%.KWsh#fզUe cJXYo*%tA\(z+V<@(E&rZcd^[1P>X,LŲ`倮t/1EͰsUx$ ; tXTupt5ڞwt l;V+.$bm1~v?>]_ۻ>wyvR)\u[0t6FD4=$@N`/6sP6D2RS@݅ZTp#(nAP> a-m%|IrFE0lzdž@LG7XX}VvIJg$+dyB30KBiϢ;zlU#;K@*x(mRny=LJB!n,Lia /$P}wsёf nzY7 }eY`\O^yy7Pm:~ܠI/Ts >[o]9xl\I6 a-M-iVnQ3LHY# <zef0c7:?-O7mj365 <%2`Oar8r* P/(7fho8(m:)=Tp=eTP2mMYzfӬW:6 |d3B-d3 Tku!P?׮d ] %Ԭ'|hšWq@XC!i ֚ 65><(l9?-7ahGIiiktB%c #07h JäecaӘ-2\JC,Z*c1:IG T$ qy](Ijl$diF*yߵzv!BPqA@B Ƹ&KBy;v-nyͥkÚ{dW@s)}\<"UrWuIε/raxI-IVR_cfHiu^sf0A(M4&d"&~iy%u%e:;^V҂V`гl |(x3l\&y=h rDX]FJ /e??=4H.|蝋qQMPtz5(^.O9pt=}M$4L~]krQ|^,Tn9K![ 6N7CEø.2?|&s 1%?R}腠?OUp&'DBГ!J =~p! I Sn$BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $z$ۂD@ .?b= G<E@Zp!@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H}$ΡD6s:$kNL )G@0"@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@HXHsB$P!@\褧Beiѓ@YJ<e@X$@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! 
$@/ZC5{W4̡ux}\o_zä/z8}%d%٣@Ja\z ҝ܈2W dUS1WYZy*K-h`zG)d̕ TUVc7W % 4WZhuRY,@`Mɘ+v**Kű,Bs̬=%su:U׈S1W %Gt'Kp%++4]eqi},a/\)<\}}߆Lg5R+V3^-m֛M ,_^~U1M[|4Z]Pyqym&X<s J_Ob↡}(oAplbןp.'l0c&짲QJ䅴Q}8{JŴ_b> GiOz^FXEeM ns@b׵frBI&C(|_]ڍYjC Z1ӛ?>)T OlYʖ *͓ZrE+?s-*hJI(0'͹YbYH\& ח` (ja5vjm)9U)Uu!99K3!STRI'۔Y4lVY2_1I .ՕV3rITCeaIrWj<Ǽ_-{C''.ղIi/qR.?bqoSWX+\Fȓ1WY^-{J\@sŔʜe'cr{*K))hyN\exWY\N\Zc7WYJK4W"g;`+zte֞_%3nBz ڝŲ`!7;˧W؟\P\$%\>']>ݸq~]d)fX.Z')R,gɘ(9VJ-*E;RRBfEv6]o?\=ŻZPmP4ߗifg7up%TgTV$s'ƴ6U4*pSJir6鐂OI}0V>!ipVt6;=>C& U'ꑥ5erqˆ%DihF1z;&^4b͘[ky3oN~SowB0O<Ҫ4DRĜ3W*JI^}; 8G8:tx!k..ް6}ϻvJ" 6!}ѝ_?xjϿkWN8\ oY#9^/ c+J*[9$OPI4Aj^szNuTV+MS4UJa.g};*o!RmY]3SV4lb *4i idNoa$_>-J(nrތןUS-Lf^:…2q v+)#ϰ@'<7p7-apVx~J Fr>S_&~|=7gkd+ * )16$ SU>?2^KQqH;G@;a n_[2$j|.Ơ#cnB2Uo l_IxAOBG{?2E XoC<+S&:}N+m<\~R|O/a=pQQm:}{w7_Z@[ݛg\JhiM\˺>#1 4"*ɭ5ߝ-9j\q?t3g-~Z6'pq~iݛ`]K ˃P[*QB2)ܗ0&yQ> } okk7mpyKY5F[A}]H~υ\|LJ-\h1P3.Wyjc;w}i}] bnoޮtdIU϶0£Lcb)QW^Nz`V~~c~UL?qL:ղxqoNCw/1݅|b7s~uNsZGS*t铍J{ΨA[t\i n]PSvԚD*C+e:]$;@劅F/a@wKe=ڋlV]}S˕i]zX*sN:t殕Ֆgf^ +Kdc#ny׸{koBA.3\\]pQJ.K̈́ .#)q4 :3Na"{y R{˽D{,ݻJRDO.W$De$r*ǽ'JSLs1"i4JNKQ 1C%(wUeDB:Qu J銩qtkHT ÙKr.AHErU JEbdʧrE#ix*沬Ϯ[]|jGB.)~~ڻK[iW}tw9'dgTN_Ɓ<8ծ*\̿^_}3h6NKk%x>3g#h*,瓤X_ _F.~ L3AZ'rZWB/CCpMhɁXRz2']fvI~<h=EX*%wsl)~ڲ }~rӕN-f +iN an/4y4p8{~MǷW4X j9xvdn<Mf6xC >])CCTr} cK'm,V&#,]yf{ѹԏ۰)6.pw wwۋ9r7?\x jPҝa<m)ovuڢλ<䇕~{?Pp3ݍ;>R2C9/F qsS_5WoRj[mjj[k: =J̣%B>um^躛>mGGbiIґƘR%#&?$NM IE]yZ[[BwxPJpB& %@PT*HM \"uyx֢q|y5}c-QL#ǫ^~<㾟 _ko=̤~=-E(yץɃKg Th3d%|Ž,ш !rd b8'V `~ 0o{e^stw" ;B`gzrL;>6p vo(/W/:HSe Vkq&\QalA01`6U&@ӍMFӃ[<`D$jZ*y)y4LW֑U"9N(MQMxRQ.pZH`c*cq]_,cYW] }vW !;C (]zXYBv/uasno_\hξq3 ho O 9Ve8^u"J%?sN 1>@qcK7 &}('qz}D1c;Ӎed22HttU'?v:aD)TBv|m^` t2&Y]NVy^hyrȇᐙ= "@e=SNW*T' 'wyWF=nۉ#ٱK Z&'G S]z`ڢ\>EDL)e֬.URܫPYLB[W#נ/0iǟ]'mshq LSC\La-coE~;ik}p1mvǧkWoi6`v*>IJINrSdp&C\zYg>{=hW ?.+ǾF$µu=uo|o3`i>pmq$ Cm%hC_#f~1 "`PEsN<3'Q?f5هs*ԈK" &L|:,~B k_9q4]]S\9+M5R*%1鍘~yG  _2` hpPU)SRykz3mAC`Q!!:Hxa ($ng̀Ar& D2Lv0bOE!((d)r&Z 9[cWۙa$>-xSg}kE&B+(N ڦ@>!kbF> %.XP%;S;nݹ/ gG'`xyƸ|v[5ӕme40cնe3«h쥔^xuch(8r Z@UM 
tK(%30p(5T[v֧t֎zW{RHgpQlr^g2XR4 H!adg-.m샮9QfsbMG6~呩'4Ȓ"#8>?M777޾gܾ]֛c;>+wsYtBD)2uU 2` K,:"֗|dU](hb ,./ |!P*=~!Vޚe*1i!SB+$^,Bq6 .5Q{\Nt˼]Qf zoeLCd])F)>.yV'-Duղ&SqL>;;Myfs-M1I}PJ&RQT! 9ƈ!YJ&CAv)jdH} !UF+p]x"65Y"IW `JXĤKڰ7@k_>66?hl۽&ekgiy؇-!}Oz[ Mo۹۽|:,7N ދü~v<aϢ7Oi͉zs1ڤ7Xxxfq&M9 r0ʬ)D-&J䢂']P_/df6l0ݯ۱1XF2:\҉,߱oD/ <6ZћtwIhߠ񴪷|۹d( _<{К Qcu>_xw}ևmgټ>ք/c{)87?|~L'o;egq;Zr+ŝ{sKGۭι)|\ZvKoe'+oSgSYƕYeo(Cl5mYΓv1QE͉m̷U6م'BW3(Fgukљj%5v'PgʓW^@IaR$ 䰇TPF!&-am*\K|* dMdM;IX@|tv,|HF~\˺н|+d@}KN*Իdm[Ե.vFu߀17븄)lW~7!0ӰُZj}V Uβ$-w}! VR>]OzEGGAF3qڨi_.@LyL;6^xx݃]\S4 aj{іcQ NC гQ')FR8 SIyP@X8NYK!RV'ؤ2-d\2([wF'LJt1RIq RZ V)1 LJ;"$و6UL.}7<_1dXB4F/ddSX}-Qc b>@Pڻ^#\*4UNM.B$-AU^dq(s svz^\dcc1H?CZe V_ŬkShaLJ2xiYo־=o@[:7p QPO\.t PBu*˗d5NֹL4I֎(ê) x3Ķ>ZKVԢv+ao9z@WR0X1Y+266tBa[ʚ Ab P[`MJ5jFcɿ2佌!}N/A|$MRV}3ġFP"1D6k:`N[Ha)­RiC`RgG/6**b)7pWq'tsr~X hō 3h#!*đI<3['ńcj <>Kc} * bR ?1fiqX,a:1Jts-8Av_+_*@ Gõ ۧZA-1D95'Gc-!`{)%@o:k [Z’v=֞c5-|K[ /I9/gK`ͳ6 ?hO_}]6NgC=&ȣα09#IPF`Zŷw7qY͓wYR٭COĭZGK! 3jg)#_ň9XFXpJk3rRVt0tEXH SϮ/ ¸v FE I~}ͯzPp!6a-rn%bX)k%UnZ{'I&CLz>z$5hi/6R.׌Tjr֤{|\wXR=Szz~09v)~xقa.8 Jo=0kIn zk T~K L\p. 36DFMyA)΃jȪz9&r6R0(㎤¡R.k%#RHDc۽ZCz >.@^" )CQ|'ocbti,^zyV>:TLGJVhB ]ر$l>P%S](3 %-'eyes-nbBV2&WtTLk"=YaqX0;CPqiF`S,H%((PG!i5/uLD8Jpݪ5,C0!J[[X1c2b=6Fs+%ml 7=[xrSub4\ GcY H^T6b4E߀\iO~i{7ݼ^hpWd*")|4 ²8= mHerfa2ך.uv]Ma.x>M_^v8 Jne/&F45铩6tVrb %=uBRvf㓞pa3LJhlCK>PI .#;J ehaR\HfF:F3e.`y^MW^;`dp1ٺ;"&y¹xRWu6{yZ#|`: {u&ذlG7}[n'z8xpMfx4|H|"$e5tk1[Q %pI^KmЃ.&SG39.Fr&w*,F%6myoA^ xzhyivz[UnX̹ CnVyg_pY/gf?BҬf5jhӂ՛˜|uc^\~.ڸ/J0] X& 5n&9 ޑǷ"r/3)2b<~d$Gu( ڲ}εuC rH# be}0z@1тFQ4`pHn: Ќqh<Jxc|TM(LGjv}t;*Y$~!7ݏѲ;@)t[vݍbp Mpє%XDX8P|@`(tgw Ⱞ5gц␔RԶe׳w#X4G%} &p96;Cull|R|X=M*7TNEJ FHz? tva? OކAO?p*&#BXp7ym~ h]6Y#SB?B5yŬ{[?WFez#xTJBQԁ 2Q~3es7kYl̛;SupgmvRB;BumԂݙFϟM":~;`REMPEv_r֖^cޒrip2{=(⇅Rj3I34i4C''o^? n:SFӛ7^'ښ++YLoROWS0]5]0꧸+'׿㻄/W096.,N?$F'˓Bz]p<=c.? 
g~3\ޙ9(U׌ =3,s*2Xyj&0ͫb||Y;ֳ2-c=`T5[zJ*\ nA3 yUD׾_[-`^]jDֈL3!޼՝ɺeYB,kWխ]&W4.uپ0tV0MWLe꯹wϚjb w A57V"FMsFV;|t998)Or~΄yF" E| 2ڦi(0kT V0΢yH%Nɣwd|/& kRI )+䱤TR$HofVJDScs݄ƀ1T)b+t@HmOjEꘋJx->cmA:!}%甆\dTY/eT}l~?קs[Vjc=vY{$]%銘G1aK١wELXruE|]%dQ =s>zn i5qmr`IQN)93ZIcL rl_O;_A[\=gVڛ1\͞otJ*77ݬ!Z)\BY(qZR𯁩Pda .]<: 5@ /[dcǻ`y8ќfvq$hW¶3 0-mkZ+riiGۂ"gvP9  S:zSbd5(V幤D#Y*Rt#l].K;a)k𖷘QΙZKƍy2=!KDS*h1)q+DS{.bi0;R{JnI5 C/mY3oVf~7OdpFDaAQr5N[pYLY8ϴ8>%+piR_f:][s+:-dOU%<\$Zs,dG.HQKjZӣ&M@p$j@T\1)9\6"l c7$Iݽe-LIZo5C7 uhKpr&>ϧrZ T JM QO|STD](HQzTf)zZNl! Mu%a5I`DjQg M5Nyݒxux'pW򅘥@g{g]B"vR횣U@HU)Q8p G!(d,Dhn!DRh}BI l.@?5HQ8-莴+h|BֲRId>Am&.mKv;[zt =/OЎWaXF ؛f/rxW_*>yӕye^;Rp JATrʦSUξ qgҖivuZ9V V9 *W2Y紱2t11\n1c+TQEgQ`S59}&n}{|Uxm tO/tn'8B>)- Y' x->AfSU]PwP5uE!ɰv&݃.e޸AG)D"JpkOYվr-x/<6U(CXr*O(ZApk*RTʴVqaLxM X'ts#{\ ÅzW0C$4YlHQ98fH|E3Q{QA?mM@Խs}+EbIː.F?A^=,oLU獮KBRGX}ca8){CT)[4+J: 10ġ\Q}Pu( :4.1,8c0aֱ'6.- UxA 7qvdYk&T O.6ҵdpC p÷^Znh.1琵eY:e!,% "rT,WBNz2%TW,_[AW/-~mU"TLlBp(UE *LA4({[ BołjOgxh96^dv2|Qv,bx$yHӶaaX~%,`OfǗ뉞_94=:Q~mn{Vܵ4o)D֑6瓳n}1I4qdc?{neM}ylyyRG o~z?{m~_~s.6-'@|_ >C[]>z5;meܯGlf](&^i:/lz_m6Z-Z-jiY~:-RT1\&ͻ1ؑ`Rn;Pߴ7x\Ͻ&&_A#W̑r5H[]c0T_| u4UEhHD+ e|"lY9s&[hpX'T $k6Ps.!h]NЛv֡D U6G98?3ĝO{ggzMMX5PW)NK*:5q~-⚴qZZ۱S)M\ގv-.%Z.tɃuw鼏LZV,e5ء& Y*`dad2vh f@ QɸvUdb.UJpFuEN9F)G+ (}QVQGf3P*YQ UJ:՜MUt{gν&{@UV=q_h_4cBR:O&b6NRtCEpJeinjIЊ *Dv hNI1E,J!0gG9gEM?- 4tG^%L:+LuTLyvMڲ"뻿Ȉ9zյ|4k^ҫUaJ 3bVTj!R@2Ve#ĘW1TB U.cE'"&*Fo؛8[Jo-Ğm!ImFl oik8Fr^V+t||x%Fb1*aPTPTT BFb-1Ơr,䄎iRWmVihd/QX̦x(( S}-\U wyb]Cڽ}vjj# P,D퉼2q 4t+%o[GV/b:RAr; ¢bkU /[J#UTsH}wMԯ68"կE\oEv0M >`#W|U*((\]DAmZ)@1;:VHSBiL+E`["١[˫bJmkuvQvq[UZ7%i$C+9yI8OQ{kɭnSšVǾl~ L&+lK9ZȝJ ~57\Q,{O>7&kP6ǓϳZ2P)yyi]H&g1=$BXr.thv=uY5*B˘4V <WYR F"$@![sB,6jT UN.l)ũ&AS߈X,U yHYqˡi/goŸg՝Z<=\A \Ad+ŜMfԢNV#~ v"h]mIV1}k(52:d퀩XST`֛8+uNʯPwM;a:ݖrKq}>c[G_mg OXorqݍGi\s[-1uLq܉cZʝV5N²f t w pCu: %1 $h**IfDž2+f!@Q{i ?ɪ2W3Rܘᩢ uLp3D]HJ̈"ә[Mg]b2gڛ8;$\R9XN)4(3v>5',$λ k6GxK]k2jp4;dBxj7䡲j8/x5Thalzt;x/`.k*]6MhCjJ5Trt={N)֫ uc^ցEWPȯJIWUtN)&PR$IMul@:vZc'е8 c;֦1B@& Wqmz &MҁW$ץPseE&ơn@Na$(\9t:$v+Fao1|9hBqVx mXmQE 
,"Cp M`KA'H>HѮiN .#C˞PBmIJS,`LѢRXպZD+ržk&η*p‰Dt)z4̞K&)g)hJj msOFey ec uQ9)k*ڋD89-1骑\ΨlG PY`l_c_ dh GgXds/{`nRh>de~lnZSHC.RpUbuVQ"(9ϙ9mm{]EÍ[Col|ez~4-j9ﭦ[}1v3-/ RmZ(ȄQEV: s%WH a׳aRƼD}k_p߄Moߏxrի\ g2&'^o$/m/gg'_f[Ewg_!9;k0o(?7:YP@EmQwɕe U4ޕq,ȧwdnx b:FЧ̘"߷z8<$sxw 9SS]Uj̩~Bg'sZb<3kV匵/5-ūԌ5R.n/(W͡z߭4/f_La=VFO~~׽h6hz{xae,2Ĝ.˫YP'75CoG,Weg '|̗z{Co4 ` pc}u2m>{1V ט&L|QCݖ,'\uo?_M4a6 Hh/j )L*JѢ*ě]reXhݜJj]U[Ƣc\WRZD"=\H>A{M2ǪLrIY1tjO0sHW[ SJbePwgi ώGN3LI3 h4}_2Wfܯ9~fua|A_'I9?֗RX_ Ka})e,hJ)/E֗RX_Ka})?d~`!@F1ydVMb!&qK$ƥlH6%٤ԃsx݋)}@J)}@J(KR)}@ +}@J#8DŽ> HR> HRHuH Q!~>2~AbnZ]xC) ފ*9S3Cc3pIgf.aۉnlk0>!:%EKf5ڸTBdT:kltQM6RrYCg2u2LwZ D佖豉hj iIXw6vuo5'^* ~z{^v1n'hh H9l2**J,ec #EAg*j`^b <* 9N]`D C1,,VJ\fEnE )e, ʂXHoN&ԝKN:oV?Խwp6ז .0w} -;NZI6vFӅho&{ٷU޽]o!#F@ё2-7N 1Sα)\ ɛ%GkQ;"b'8>jI93Tn˘=Jֳal0dt*-T.9J6J|3KS.Jܝ1`> *3> _b!(iSr.z2- $>(-Վ`1 DbSE#aHΞVXBI2TH'K?ґ0хT-v6vv[l;5Tv6juf.vk{ hjZLI qLDiDH-SR0r!3b^tbl r#|a#Nudmk>N>$x*1|t嵈 bElx #$ ʢ1ʃGbDD<8F 6ȥLE_$E #$$8pF"HLvJ "i΁'MFbR1;{d 0jb٤]ę".vm0eDs$v+`xSV 8 A@ ΂weǰbIGW{H2C-;`l5eF>m(ϰ^Q2OH+ُ,{zX;ŸM`0Ti/`U`3$B:kR)UŴŊVr»yDHN<RZ"$=6F"$cRJKDLq"ӎjP$!q[Is Mˆ2R%㑅HJ IBshCʘtfd=!'ָj<qsJY{}sXC&}7 [\aXn/Vy*o gtjmHKE$ 18Iy%6( AggUb,Yԛ*"F{0Ї"Q'#K9 Q ƑJJD]MS]vao}юDj5}ҪT^V2oR|]1#WB )y@CTJhgbZ qLT'O3;m6Q'|o(]Ԓ}~} lޣ݇hIµ__?AyLl\Lw3ɻhʪ4rW[ۜeʼu&T[An)>[TH9%àaR @i$#x8 NzAXT1Fiz8D"#tфbJG錳T΁ (B(*Lu*8O9+'Mf|1=X#DAbKltӠڠ@` E^%R O<*@I/U#GO-cJ[:!F. km!/yraJ7GYOORZYoC b P۳ .2g2^aē7}N~hdfrO)l{/.dz_$ٚ6#6ݤ\zAJ_}1͒5V~7o׸I_}pӉ~\`yZ|ud1$^^un|iEam_ˌk0?3!ajՂ%qZ'a9 ZZoC1WlrJabSdn/y n},^>v14:18y{~W]^>g׺oZg[udX6e2x\fֈ[;2[fhz^rl!;+-`Nmpzthz3zx|jJvAt0H0|ӻWr^o+,_8oLwd`zQƎj;=_M09\xdEMsUDGS%1_ެӹ4):07{/H,;ykDXvwA'8EM HW\)t:)!12,DcdH=q_bYN#S͔(! 
0Ytߟz{4^J(|zLًY5c VT^I\1ix +cKw1l8;Sk!6RNYlǥ[gz8ԖrӞx-t2+;˻ulx;Οg39~CVaDD&pTsc%bD \9%]Ť,R~QqeQjWq~16JU!^٠LeL )K{NQfI)ZC9-Cʊ-XR*)AVofVJD9,Z1hނƀ1T)b+t@Hm'gquE%KfPgcg0HTȻuw5}S2׀ix"~?\lȬ}&vuq,Б|\ aA>iȹ UR[1`$gOzɂرcziyWX^>2*SQ Z4|>M}=?6ΓA°>8-`E?n \[ANnðe=i N,vGC{.`0ɼ{ܟ_n|v|-HqA`v4ukR7~[]Of[!rFnW U`he)1H4l0ㆡSbd5 i\Rk",Ah)qgzH_ieJ /a;sigW2#C9'nCՐƦʈ)Zc8|:I,%F,}^ոg7_ᩣGs|gbuE6I5\N$ZcdSJfgXMWQ,³J/t ^1r( ܀oL /.1?>,+feY~4,P~"1h$凈ՠ Lt'ǹm%BsIS(n+6>C.ӵjaz l=/ϹSsmN) ;S69Ǡd%Vjki&I}#U*gQ58y5r ך: 8ZvLd(UM*e g=>=>><ȋc[_{B >£ů3yd?yq.$A'd*ԡ-T 2*xKu\H?qM΅r!$\S Md9g R6sv)`1gH.<)_Y3C+(N OZ 8kX Ad)@ FL @G=|vyJ9>J#rcn˒m$PO c1`^05eǫσW{LgA֯aY=P r4@yƆUh4oư[t="x*lv7ƒ 76x7*$c8YE9bfa U^)ؗ][A 8Zr1蝈m*~_AhZ[_6I EӴZjƮqnۡ/_zŎ\miQ%R].EmLKdR63xveh_r1J٘6)q%s|0Ïsk wͲqTshk޴!@eE@D#JE(diIP-Pr%aW+zM@(4ɺRR]`IiI=V m3˄}?:J}?tE!)yd2(5){aPbu-3LAv)jŔH*}jjr_W^OdZ{tA"("cےVO6VDIۀxs@ 1v{S!o|ʟ^f?-q֍Eݕǿ]|wc(Ab~C-:~1FyH7a0f0f ],?rtOSlՃ_y춛qTnuuFtxi&[TYHe4Kq5wxY[]TL-%^^hyxNe:Iww޷ ˻VV?8+$E݇Fkho5e<]?;Ϗuƽ>b.ك^r dϳ>Q<՟VyEƣ%Z.Ve]i)Ygy+"eǧ,_b!>ʻKO#} 疀f|HTyqMZrW[opkn1;cRDUI Amf9oE ,-#=uae1 Lpj85IJLWoZ"cmNDEs:t&U3ڜ{gv糛^\UlYvr(3֛$Nھ;#x'; < TbfPUGJ*|OEOMOFm b4J^YXbSWWb rJd]I ^o7ϭД63na.3 {SWWo:y?2cO5>oؕ} XMFml Da[($lN!!R2BRHEhU\5qmijRj+nB֘+t͕HWi=<^ĘE<4; 6-B7-\EzgQRz8hi3=SK@~0G1V\3_T-ܹvyDe[߇:fv)͛UqɦBԅHQxo!\*\3||B^"fv 7%lD~ly(v7K~wa㓿k}+gRVmUfuo_v6a^lCu \\!2{{oޛ꽹zoޛ+`{s\==f u;]l<*xQ)`!fvC!6&eQ]OLt( :mD䟒QhʸDb4L>ql(ðsL FbmpOL&Y 768_t͆NɃlߓ堖4ᗶ=`p^|(~]7R0|t VB Zv1'b!yCqڗluk _w=MVU\UV*Wuѻ"cm D9d$TAXC^iE3h)#N7\Z[&K) +☨: ގgsG=\Vme44ΘJV# XRl.ʖF! 
*^o/e;n`\mhk9E bT9cI6{SY*`#Pk4O5]&¶fUy4%,VP& j5TbS1UVB=2Tl9B:bl<5Hтҩ?UUP5KfWY5U)oaJ6^DDdkoTo)r_s#+ߪQ5@:܋F'yU΋K J֝]=S5ɧ4kkW;xd8?]6ލG7_HN 秼zh'^lL& n?Xoƿ&.mפzDJo{9y+[y+;oe`5O[y+;oe켕Vv[y+;oe켕Vv[y+{NVv[٫ ʐ\y+;oe켕Vv[;q*/2Lw7|>R sQVg XVU)^30kqr=?݄z~^üA(DVTfKʱRbH^S '.XU[5¨.Ʌsr5LJLy8@:kf:-&3 2zdH78_tA4 IzmHEDG%(J5k(0B fUx!Hהh !@`Bll`& jQ 9RP]-Q D}b1^=8sxB0A AV%LlBױ iYV1Ϡk_y:zs0'e9ytwӁ€Fvo @*˷Cp5$\Tj`KZN^,YpۘQԢދ3UX|*$Wzhcښv8oKVYE;ٻ6n$)Hx?T[r&$e ̘"Vq "e(Pʃ`zݍ~ p D݀q'xRAp{%8FQ  t*Yzch9BTyp娍uBr@d4q鐏M4&ȠOFfbܗAdp 5Ye@2D p4z Q!{W:ft6?!Q*1OG xLXFxDMVIF<Ŀ"ߕO+T3|4JDrh!ݍΐr3a s"tCZZ;rBQMTLg4x_%mu*U"'2d𐠅7o'cw՟] gNz ~ێ|{xؐu1>d-ؗA>K,PQrȸFEL1ja7~uNK>zՖ?7ޞ͟A&Gu] b z9ew)Z8Aqfn5|?] ׈Uikp#}"%,3.Z>kѶ)lÄūA[b|A-eD谰֒V[i'R=H%%:.x_bAIf?N|R@2Hѫ*F9NavO?\v@yY;9eGpFìjN[ )$Avԡ-gi-5ir|D-@rR!Db:Q"u;J~*.l:+FΖ}@Q3>)*NN/+=V"+z+Y)xD&$8pnڸAmYNlJr@d2K!@LG0*/ךPQEqb)s9WHNbd'SrMݻL95\os>i]im)B S>MڥKtQT"a`$o e/M\X ᘓ4'0p#WY:)S¨7  F,\hG8 QQR UE h1Y[b)½bᜢmӋ,&}QI&iP`t5OrFa0Ȃr!E2=dH!~ ǎHYD!bӐrJ30H%\S#Ij2HQDq)8IeD 2Mi@lL 5i&4wKiblp=⤚1!:]qEbkvG$</EH 5!^q(DV@g4ڡ.]CrOGqu_=r  ~%%E\Tޏ4eOkz*äUqF"A.ɁmCBk9D)I6pK @q"$q5ʟ+k eBJJpŅL&'-L%ʲ)8R1%b ػPfCrҶ'$9?Fuq?L 42$b3"J\2H21U?U9^"_Lr`f)J3!x=b,9A](]9w3\%^>3s||B՝=>EX+:OAJna1+_i2!'~iNm˛O에[p{qɖIm*'nPT<~O9Ex+q6N ?WWJ7Jr_z2"bw 3~47vvp6n4;@kj2Aa.f>_gYC=ٽ_j J M,'sr!9$knGVHd EqKF&gÑrsH@-(+{=%ڲr.7Dn$m2B֭3W.5ν7n>R5?֛NtfebG[VزuݾMI7|vlE -7h7߼͚*r=gq-Xhlϗfmmy1MC1mqw/;[ܓv46C)#D9+JdCfpRT[[hDU:Zˆ,ۢ%9oV `4{DyVc;Ӗ%`X?Evw!rAkkpdهL)Z@eimYj_M?kw[ww3bQ!$ R\]JH', юg+ V% "Z-C 9E.&Ņt$+k`M)i }F|x)/O%n K4w;esK'jc"NēmIEMH2ǵ@P1N)Xfeb n+0tQJkv}7DJxfYO3?9HuaSH ~>#1G9 x1/oɗ> ѹ߾wIH ^z4/.k_~1DZ~vSEj|񦇅k/ӝ]p|Cĺ..4b{U7Ǐk{?XN{Fs_:NlNotq~%=z=5S322i>ص׶FD^]ӶEoeÿ#wP Y0Ioi|X;2b(&,7!Lgml;2R>xʹ;j~8{R_ͫN񥔋7õ?Fæ{5B$Ǎ=GyWz@:fn/?%v:/or M}7BH,'[n?&(&)p&V(Lӆn$4E;bneX(/m9CI%kh-Ba}@lc4d52Lkf(a ;v>?l홌FbEKIxQgc$<$}rXE5:dBj<>9 V+Wdlʥ jҤt8lQ%&"77qibwW 鴪M-Ryصf*\S=\xXC71'uOH-b_) ۹ic(f!B6ɚh4v*SDSl .JwN}b)wjV݊{~\v>Ǣ7rrʯFr'vrx}ˋߕXjrT'p@)c3hṋt^]AEo3:^$|9 ڿ!f:!Vz#0/|V)\O~ uWڃMZnHs*2<)bR1Erɲ[n2Dʜ̈IGN8ĸR]!Fed"Ts_ 
gH1U]kN2zd$uB?C9)K8Z)]Heo}bcU^3?o=?7sf7ʃXcoEw0 wQ4rmǓuSͿL rexAT~ ƅD<F$p}vHZ5^N&i-Mʫ}a q2DJUgl{#7zuidN~UшZi$rD vJC.UrRj4TmRtcZXE ShW[U߈c4i-8Ӻ7-F\q;%w/w=NSg!־B@_PJ JZ&DGL<dFww罽5mzJ![5kk4p5@MeV qC.9q.5U֡ɽu+SL706+<7FNk DYIJQE?;8ӁDHb>1t*S'>z{c< B1t>9`'YdrQDzȐ/2di|z'{;Δʑŷ }Q4R67D]Y݌쀎&E[.rq.1~OWNE\?Unc1X{߁{m/ǜiUoTgNr[I}XWUyߥe~szĪMhco>_G߂Z`۷.7&yVl^*ـ蘯/駝|zm7oQuX~*ewڞiτUVgP 3(ɝz NifP SlX3(6͠8%dW;X.B,"(}b'\WI|D"U;˅hpj WRIL! ""\`c\4bƂ+V]" S08G\`D2 O0rW J/f+@ilL wrǂ+V&t\J+L qe+Xƻb6Ĕ#(T.7$X z W,׺Xpj} /T +u1.䢈ƻbZ+V ɻ#3*kI\!3҃wn_PE ӧjk2زƘdev]oKs폘k gʁ,"kT2Ey&+UW&t2Nf%utB=ۧ_ُ0 z>ܧ>KF'}L VO$rɑZ-l*Ero&s=ޝI!\A45LIC*;>$\W~2."\`}4"X >d;j>BΚpE@+ &\Z WwV%%\W^ZoǬkRUo}jkJU2މ>Nhwj'uOwhwθ"wnݭ^R:~uZ{_İFX@__dyY^n꒾>gE>|\;',@tH~i:M/t;s!~E"iч0w&'9+on.Zpya!n/ۻJmmWǻZ<39йů0ۚnF,Yux gQw_Tu-H8^X~8_<[N^mV ŷ&a~L]5_Nl>֋zX5d?K/Fuo] Ru}?׋#T/VPM~[n󋚌wQ~lmq@W;ެVʛSG-1;zi77 -Z~gǘst6қ6o8P|[?ǝ/:C{mxNqޡsHV`yԡQ^q(;.ٟw׏w|[ErvtW$؃W,w}Bq*O!bH0ZW!|bZ%\W9}D"d4bbu:t\Jof+DL"uBƂ+V .t\J4 W3 )" *\\M0HjA]F WJjHɵѤI"t\J W3ĕpłW$e4+VDViS}B累)$E H*t\J9;;Ә[ACNpQg$U?V hh+ۿ$O:rBU\.N=qvLgo}4>-6≮2oK%KS/S~ęJg ' {yO NTiJ[% cpLKq/ W$k X.Xpj+V!jRɻ"}r,\\WOdJ0 W3EpEhprWtb.sĕ IUxW,Wc,b6tp5C\tH5B4b|0H* W3ĕU+ȵU.\ZWS08G\9Q S" W,WC,bV+V| qc# L W$J XUvvDH&"\`kwrOz_+Rdp5G\9-:=[^R+"\`wcz x݆UZp5C\DH0Ă+V2+6T; '(0Xƻbޅ+R +祇pD+ *\کYb՜pTŅ րiYG%yOi'+cc^b󀛜j7݁ÇQ۟l$ 5훽Amk{[V#-P7Q]:i;rU曼Xw(?O俵q72[oV7d,ޜ/ۦVV^{://6qekD~F ?1WǀŌiEjƜŌ9{#K)VK1;GPLG.9)(h3nAagy]z}[6&/+UK`S L&Xb LTTPtv@[:M>N|4ˊCv'ey2HPTP!e(%xx yMp5ϱ%Xɺ*VmsݙkK4/GAPO.5є֭وn1:*PPTxt/AeX vb[B{աR- 5 # l@lcwkYRpPT{B;}rΪBeF|3c9\A3Y+/Ɗ6uC AvIkz V R +V;G 2M!6ȫ` `=ePFeBE 6]o1t,,x$0LX;&q b j< n-KBa@J#D 6P r6љ|6B@(fetM=5T g @QŁ.єՠVY2 J` 'lBR!e]ŊVl"uQ8UUDI)bh=yq`+Tn$$Ɨ*ޢ5ض( ʻ5tYbF TD٠ `F ؤcEєe!q(6M0BC>,:`e}y.@—*>Uv]`B̔ZXI| (#K<`BJ_` \,g IJ3Bk]\e lRâc:,Ohv=UBWE{0z-4`2:URu4 F:]^K9ZzπyILP0ɚl>A& q':Cp䁏 Wǫ*YȩՏ̀?oɍZź fcvMl4BNG/uKM_?7.OaJTM4Y`C y#gaC+~uizX#/\PGY ANÔA>]K q2GKշbhǎ@HA-Do@wHB^vê14s'yAr $"e#{(q#=/Eq(UIbPZc!ZP {(Fw p.*Rvb`+:dA ;؉ָBŸ?X+WHWY MS 5wg֊o; zizfj`?BZfB[Hn|}Aώ擷%Ct(.ẽlɹOKAh;U/&ni5‰J;Ή-j =A>JHn49ba\&Xנ 
8EژZ'y?Wx hNa8hcyjoXвR;kծU5}V(rd- &sp ބRmTc]Z.l}~kPiRvrȍCAPcv cT}(ub Ϋ.Mt vLZr +BpBS+.z|dNzOQ'p8lC}`;fA G=Ap&/ IUkRWI˥) D,0rj;VàYO C|Y.X,vSVZ ]\ v> qIxTPsLTq|K6Br^}ދ[nuQ|݃A #~x1Dњ${˼O5[Afg_{ "{qݟ.$UkAv[wwcy<&Dy1vsus#t{[,C4cl+p57rۋLJ>?BlK0Gǃpn8nw/8ù^{H21Bw 7iϖϜ=#ߞ[Ga7n3t1i@AhH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 I MiH@$&4 tIu~^~D?o7o?}~=#X:t"O B% 2c9&,`rX |I&t7!'xIOaw^|"{vO_O9ni#|]lu?p03ڷM/dN; 3+9O-{*9ss/a2{sDV*3 a$%S3y_CXrɄ\~=+#e\~e'<_=-qrCq99 '}`= ɇЕ a+t%h=v#tuzt$6DW8fJ2mm>zJzN|)7CW7r4NW2+| ѕ] ܼ3ѱӕ$vut"e6DW!cfJЕ JPrR:A1l6DWl ] \BW6c+Af$b7DW7CW7nG%Y!`GQW7ƭЕ ((]]eGe&-&?|f2ɛ()A`D8nvm0壴 t'JJ!n^}/oS'+S'3lN!u5/1<i=q ۑlgsS_O'}eAp|z,)=W9B+(zJwdWe~ԃ1+;7z!PtutM+on [+';zǶR?BW>o_L}\[+AñPa+ɧ [a3t%hӕWِ U1]pv3t%p)mmc+AA銍 7DW8m7\j.;] JR3xtO6npΝA͆BWvAF*nK0oڕ͛+ee;'IWG{֍{,i-M tCr %N܋;丈\CCr sfuIn/G['~ǽ}2wܯcҞ:\-6IO炖{MB*$-Ymg)^Т8q_r/zGydO~>AR;B\Óӭ'w;ub8g5q뛛W/o)^O4BU8zI7I[qV/p[ﶮ'FnD,b&GIusd&% g꘭IgLX]ٿ m?s*6>)V /8.H(ʃІ2# g .$,R J 8%f"sl)LQe1>^C]:uɊz3bB߰crtrpHe`g;嗣)moEyfznXuuT̖|&A?B }|s!~.ʽ8Ɲ=|]ݢv[OzPJ$9O ĒjEvXȣ qD1 o$<?PjK_*x?LEp&f! 
Լe/$ZPaS$9Pl(ݭBU8d?~Opw#nnj8l8L0:K,a0(>{ ѻNóU"KͮԐ7pԲyA*:SbY.x[%T_k6ϾlZ* ۵]⤟zbID=[ˮpֹ$J={Jv*w ,-c={;qrb6SCHk]9Si]lB7vOˣ=.Nţ:o£R&D S˔~abZ&g*H h+ -נOQ@lfJq@Y8}@ȓBXsnV&\ѿ ȹzJoƒ}g|/UB*'{T7j* M~4Ql˫^]Q1^/Mr=ʒ0cpB!FgrI;$36&> nIerrFfDz6&l"S$meס T@$ЩQ9"DZ!##wa,t{r$E"k:cJ^6,gsrɈ1vt;QCD tVl$Zdn KI ,bbBL^w4+_ /z?{!.eb`,0N dQ&/J.P1gOjr )y-0DRghrCѲKL -%hJ3㤖㉧\w#<5TGZsjr׌'\Eɘs#@h| ""D|벢5" mXV.G暃lE 6GL}HȳSA|tV<5h|u*+apbt>-5 Q.R-| (5('6䑒M"xIS~ɍ %!r*#xB_e6IeEtUg Ke}qwQ%U{o|w6ŃJQ?WgJZkϜ97ε>-KwKhCT.Tx6/<f^Us<=_ t/ćowHi\-)}͈͈f$ Z?Q,V|4\ziџ9i*:}}Zoq|+) kbG$;)jx_TJJ ?ם;ћ7Oo }OF`>z0 >>io9ij[5-:id~@;Vv?vkg& )_w~x6LqDVKGYhXMpu=L Mb~>5.&UnFH quY~|P/'x[¬;~V:?k0߸*qɳBl "FCfQD"@yI% $lAKH_:ذRydK!xkNYHr8KRD.TNC>{Ԅ޸\U* W&Xm5FtN*1J^Yy):7:>):jv'm{uBSi;IY pZƅ\ d5M%AT,g0uyE^cpݙs]nM@90#IyT1 =(u=5Kx‚[u ˱qdAJi!XDV>;e1T䙊dzi\ր(=`bY@*E [ιE.Lf08jM-tBj{tkǫ)qxv\|Օ7Քc6d٦Kuu?-Sbh4Mp6^槎^EWϫN$C3T6 zn6-yùt?+`/䓹B+ ZmN B  #yy6Xlīe=r<BcR#[i.Y !Zz=– w[.ɝiOOns1_#ɷ"*;94!L_BY,Q@#U@qi<$K<sLHJiom=Bx4ǫM6- r10d ,sT mtPL@ 91Ȅ]jn{ҧӑJ D ]VYtF$2a*ی+r`(=8aBT43N"wb"8aŽ.#"'; -xT-W[1IF1i3"&iJ2K|3*U"f"sZ;!"Yz jW/iB˸äECa@Af2,G*Cv.d2'rf1:MZ) OQ$O௜yMf,0%ÀA(/L63K-X-k"iǣTvl5 c+ m|-<)0g϶*퐆ճIK+WN0Jd< |,JNT*WP=Vc:pͳֺyߟw,١S%^ƫVٵa-}>kDaYjTH3-\@}2%Y]^u|..ޜWd||jSNx:qؙQbߓOXNik=G߽iMbn9|7]} }H`k־<GTovT6\݀ikec_֖AT.~U6U֐iv0@LTŭOyIBh1X=rg[̕=!Om1W5k4r*tHN#|[ɢ*x<Ē.lґ8{޵u+Eۻ{b>2Ž7bhi۠q#[$;M;b̑읕.+Ϊ{>` Gꢰq *E/e!pl@17<(q,pfu} ZQ0sMmH*I/HCOBB0W#2* j5rhyx:l-,Mrε'R^U|{ ")Zu;?!ē6/ze+B8A~r"7X{dKK1ZyL`ecN2$a l`W.h xch#LXi9a(GYrA'˜-ɤK/8LQs- \&%U[3V#gjX.BSYNUPTm|Ur&K,J14 hO4|쟍/\c \ȑL/ )&2&cyTiȣ"sӕljlidZbXtR)5 ^CDa>혱YM`u)vݸZl~pLCոcWm+kmi7 j@bVL+ ).t)X]t*-W"3>J d r  kbbA$LSiĜYEȨN\e}X5K,E#V5`u5tN#vqU;$0eNHƳJ>f$ ާbIr$ U551@qR ͺҒNgZdI @C R;9[DKS{ҋf*\%E^Y/N/vzqLJ 3 ,G<8n"!*QqZ9$Q <O֡N/C/>CQYTMAs>#7z`3JkgL(-~Aяf.ّW]F 4פ+B[ <[LVs#!72C@H<Z <6F $"z)QeAZnX"ad.p2+ݮ&BSA(4HS/AeHYrSb53AZ+AEAs 32c:0\f2FΖCBs9鐢Nh>=dxFm Sjw ]hfQݯF [i.,[-\ Z!X#jCVX { i +xHY]4~cDt:Ȇt7%t8քW\)]ROd5_||p [SZ9Oyi-#+=vH뙐"`ŏ'+Dj>ȟB%.+pf.!'K?m}lÔHa4lќ Wɪنy yKS2kMbDһ 
8&"mA,9mJj)) * B附"aՕ"e]G\o2;״ v-]1Ybz'bX܌Kph<۟m`4E`oig`ҪW>Jtc附"II>fbaΟy>Ç0$[5aƄk=kgO/VYb6vBhZ C">|L~BRdVI$Ф0Lr{* eI6xԌ—dw1I,Ad-f#\ S3)IMX<;{%!qr27^~In '62gL LUL.Ex&>o4"2mo^NަB4ykx0x`Y_ڈ?=냷;/N_[ TV.mܟnL+8KpQ& 2!sBoWw9ohxŷk ϻ x7A5dGd2E,,Z䧲 l TpD\HwJ\w?jeoˆPY,Ռ\%9TmEr H`Zr>hA u|jˆ``QB>鋴SǛHu}׀DD_F2HΆ@hԲtI'$r[SPӨo+5ӟGt&K^D"Na7zgSxsroX qa9NoX/p˩5:*P.Y4xqjD,ˆWLT3!z89 K{ҁ'kbm㿬a]&/|oeXN6k² 7v8hg=Nl͆:E:`{JbŊҠUFS9W4I6&t"m.|C;T67X/7SahL#na^q*no#Ý 3$7L8-!\r+,mV;νIRki`# _N/}x X@Qb)^Żm)^17iF$@ Kb SjivW G(oLcj]_T)vT;7 \+XP{'UI}EbcEZDy, -򶸨3@v&xg>/ՖdR'8<lR?Z{*ڕ8jiD4Ʒi gUς`iVJJD=#ke8)0S&0Z` !ؚQi1:NJ UvT:„B)4`=?A.#1̖RN&xjӠac´fڍD Qۚ %{SoSv"cOM8 I>ћ/Ez>%bo ]s"Y 1S뾗/xҘ^&/ؒd^RttL%ej P,&89&{ȔG"mf9ƨB,-)#M-qx*rseʚ8_Ae3`݇"!i6&ֳVH싽:)A dD C:2s}*]$TZZX:BNlEQ;^ ]ՑETїLQ/J JU(ꌏ4%ӜXQ[?[S#&X]R#N$#Vg9fxma\-q0.[1]Q ;ORSx~kDj99Ы;+{E^lyD"k p$Elmi%aPZϼR^9Bp[hmM_hD6}vPNazH.^AJW7`H4h|۲5" ,ھZBp)e-qcu%88NB4!Qh,10`SiSsQ oG(#@jbSFjD9.7u (Sјs5#%)F1s#2nv1S ASkF+a g֛vF#Nm{S7CH8g/G8C<"I$_9klL`H$alTzd Z)47Q A_*,0HS٘R {`SMҁ`(uYkaZqm^"QLy5,j02g0tԊhP8XeH;`qK|' Q&]YA \,oo.R<}H{]ov`"W׭vu\o-qw:'#07{O޿k6I `|ӏ9DDHQ!FO?5/|L]I% VX\j:RJbX)6L|׵W8JG#x g ppt JBzX" ǏƟVrggLAEW?}_ޭ};djOs8'ja5.^-.#`Q<R>]+4Y[L-QTh[HܮC4e НKҺ?\CAJˊvH^b6J303t*8qBh!E^ T+. cs4Xڒ$0vNo|(펑̈Ԉ4K4?MkmXb]z&(=̽x|ð=/J4fZ\^ nCJTC%EN\Q: tB4Dukhw=R#λe*I&Mڮ,]COa4K.ٕqfq|Yxō\,J`vL (MҿVn70kD$^ϣOngap}]wZ=wh5,@hWayjU PP"!(mS.yp%ꇥt똄gp)}\/URZݪEbZ9I1U_ ӯղHQZ3,{TSL&_ /NYu4VGÚO qY'#a+f1*^r䥓\V!i+c{ݶx~ cG֥^O/v=8LQv"ictls[p9࿮Movu|Lsc >w+k\M߱yMzG0Gg>Ufaz~P̭`Mf,.Ks쮴'V_Eʕݒ|*l'zg.W1Om.IWKN7h0- N3}V'aW&қ8f9C>/xxG\a?7_\!Dy[qQ aE4aW3m}_ôeccY@Odv2):{MʪYOq39|X8Ӳ.?;'d;-wsֲx1CXwvętr0 ]/8b{9* lG?3{wtwR:@E&[5♻+7m:GU7Ftզ_Lne#FG胭ԤwJ>a3&YIVމzS:ysي [Ib7z糺+OQj6t}:L7 b8LɑѶowމݍC0Ams#l\ojOAqkƫJZw4}x{0Z%RNKqgt^*Mtl;})*<B:y>!B4F=Na[l? 
=su C͇NǾBQ{P‶[pTcpz> I)@eb {j^xE!?gft@]4k<INIno׳˟׸n8HY ir>0IU44BUo%Ьt"nn/J59_d ,}VA*4SN[ W!x~Ob`*LwQ^_Ͼy၆C#utg1gPB2x9tV O_r{mӤYd2<>_-.^٢8iGwDT©GX-!Xo~i F$j]0t7#9 [~w$x+9oYf+tj&Pm[~1W~XY V3Ϯ}-ҤEG4b[Az*~3.nԭ~|mCSӖ9Yva#qqk.RW$'?M )1$jjVnE gSe$* vGUE$c,Pt# cSE,ۙ>(5q3[EU4_(Oq0$f.UXbcNe`)1ᰗJ=–dbKFRQ\deڝlkxZ2ͻ ˛ 0+/xo(!縻x~=ƔrFꢰ3Yu &lWOP0ʂ}'t9H-MՃ-).uU2*)E\G& MH D#}/" RV6kKn3N:0@6L]Op܄3UGc%vV;h>V\XD-}9'k PP 8 }NqsQ >:K pW10,""<8o EGQr1Aa(#PCPKQ<+SVt$hn^IAFi U*ц'}lEǮc"YZݟ|v}lSHw.),;)|"}Vܡs#* )]5/9KB[-GpD{e %:fc$CV1+Ur4d$Ja9LLyaZ  Sp*%o~+S{?4sḘPx罟8 cMQъ/| X6$H L הဤ!8(%RcHD(pCP3~{NF#!eKR {lp1j|diA\9=Hm<-mp9` jƣ#&ʊ7kt2r35otpq.%&@RG%gCuZ)ї>76!Bdڶ-Lnj:4X" piPd0[[P)u vE;B]Xc>YLy"1y$4 -S{o\I!*0*i"A#O_&3l9lj^)K=$ςGi=)(:it_B->q_2ʗ/Hr P{Rmf!78]PRQ2i4GF)o4r. rPM1S/R3f2H(f@avɤNEh Ajv'2Vf4d9hl5ղp.^mQ=βvc cŴ1xOn^t16OB-BV>MIHƄqK8q6r"rÅKBQp¥>T+҄sj)c};c1[cLHi/Y&{^^i sܔ40$Yè0I W6yBjAշl1V64B^t#/ID{˶wv޺2 e(A݁>J)輤x 5L]f.3ZNh6/^$817qvnU)ݍVrbdhx֓vt޵57[鿢TBKWaәJmzj.ˤ\5-KrOg+}(ɢdʢDR=CM8߹8*wgbcFhmvFgq6[ލ>i]:'5!NTcyY7%"eۭ w>lʢP).Չܽ={nvW.PnRWbۚ[޿s&7aγ,΂d ?rNt_ |F6՗QNt0Zw>*4&\yhBomziY"fysԉz5QMxNݛoE0lZ EM>D8Н#cC].uimy|AIFg䋷$ܥ9IddL{w)Il@!Wc|CȈ,=ӴqH76+*4Ы&zElz$t Lwj9cZJϛ+nsl3&n?(pJKf:aD&JB.y2!T.m pW^#XkNp%R2Rlۄ+iLƙz|諐xָʶj_ ŰJ8BW EzEzQ*: 9 XJ.PG| Ϧo)LX*r7^[NHN#HKDX^7YOOֹVr˨U. k XQ0hAIR 7Jd"?i>c` 묹M8|d4L#ϮWn%Tkx2"_.nF$X"|u$/FLF~T z?,l,oYda`wBbg&Ж7L6.W1yJ8__j~渖>sv|+~\!M8 q%=r Aiz&iʑdS1A8.ٓZѳ{wvQ(!HA"ՐA-h`TjH)J A ppZ'2`&E aB!r`RKx %(gTT1<٥Qñ'"HWnEGc#g-ob}^"c83Y}=JSVfǿ\{ D!qb|vytx^i Bn/""A$); F?b,_dJB1O\*8$d_W݃  l ' 3 ) ɪ.RV DQ}X{5Uc Mt{J0'Ha [-}JbɓRbI˿J} ðL%"yT 5gGH%wA0+R#LX|jLY;Cl@I*tC5c* uJ!)\)hi Iat(|$Ptal5Eelܚ̅ ab丵r1_LsMeXBw?iU?H7%jT:- $6THNR7'wWWђSJZy|p>9HaT"QF8qC7TYA_ /H #ea_ ژ]%(`.ip&Pn XDR,4e[,u}k @NupB"/%68ΩWDZm ( Ej0dR J <7)`՛Mlߦ{v4XO {/S:H )q #Kik i)y] x02t*3bgbu<MBSE ]#6[D ^g{3"N^_\=*fWPyMpyE/aӓ$H4!C%lPSJ!R愪Rh6n3ލ+ ?A40= x@K0 A%8 #F :Z!Z81ltBiMǷ㭂f?&5O_c? 
ǯ-l |,_@n$=Jh\KQk.nh~4bDHnL+iYNMGBO Y x͉uT\HH) %6M%MA[TclQZHR5QrѦ(s@u<㪟)˔3/E#˕:b!pH` " opຉ*in vUxjQ _f~k?_@NWWS<> N?/*,orQ?RE!ɫ7tR ?+MvE/}B|Y#Q YW3fbJx- d2¹jA-Hg`vWw9 y)2mk4YLyֲ { +ZX>d_ƹm'fFppF5ͺ 2N#+n᭑ /vLڕ02hUǦ# WR:B_-,/f˴pqr[rg;Z!\3k][o.GNwߝrdi<]/\O@P%ƶ˴'ggYi~"wZ aj!Ƶ{WYXF m}'"ni?W[ybd/=;ыtô'Sсxe?YOw.((i gZPE\3ۻEFe)s3r9ڦ\9VE-Tުx(m+_~YoAj^ M-z%*WA9[ ^$74=7NTUW^"7$ [{OA_"_kmHeOݥ"cq_c38$%YjG!{D Ds{4bɜF>?&Bm_c##1a/R#JE%'TmJRΑ77Α'b:wŖճe he}dPe&>dќ[,zɈ&c[ 2i~W#jQE67KN(A }Vn7%+Bo(Mh݉Gݢ+f }&U}"^#ֻIx}7 ؙ>)pBڇ%;q0O \]x / =8vua)@_N) :-\%v4i߀)%绝;8:vn nX[=N),|mzt@tSd: [>)%3vD)TL,p3tfRO^H(ղ0$|.IH_5_qj?$+`^ =_w*)G5yD`Rz8~RgJx3Q>Q9… RqBׇ?xYđ^)/qXXO /GRhQI1מ/5(^zQRHq;|_( ڍ[9%ܛ@*!ƕ2A >^yg )Kf5‚QJqJiQi(E$#Lq#lNR^e)-'! 5f3jERKZTLR:,8AuiUpr [k1S˵xy iڒ^= vͽ:qp& m =`,_]j8/680s/';$_WMO6_KNïWMi1sFD$NXGBHp0auு1/1Z%I_u':Dfa7~ Ex6Ajuy|vRcI А!c*&g@ģ,?6"RȴM>6iU j@Y:dYR,20M@ǧi|(7gY0sO|11q<(A=/<,~:XntlBK^/U-^c62YjVgJc#e}߿)f1 sgsea1"NE]jR0Q9[_;U4E ymk[*MT'uuiik<Ѧޭ y*S?\wÈnمwKŠꤎw pjڻWpz64䙫hNn{7YkޡwKŠꤎw>kևܵwKhАg):0~{͞ƾ[65XomWmv?5wKאgI:i=u5ޭ~ [*MT'uu궑S=<Ѧޭ y*SR<^M݅wKŠꤎw]pZڻ%OwkCC&TSObDuRXǻ,rJ<Ѧޭ y*Sn=-F[cUUYhUR'ڸЂ\CLf>+/Q; > rڭ\WyC'c ^{p|ׄ+W}ᐓa?ԥΧD7gㅔA1O|QEJ 1Szx!N? HO'9"MUJ^83)lX}or/q SL碚$a"JMCya1(^1@pŜAP?lΖOlmxM?GX,t }#7 Žo_ťڕMIcK[{aӪ,!R{;~ $wlTu9߿=ç?$tt kT{W6fOj Zt&-jtvW9؏Ui(T3RۡV}HW2AciVKٿ)O_|BtPhm{\5q0=vi5Ym˝H@w =L׸a1N#T=[q\aCDCrhP|~2Ɯ-#ܽq59H`0YP~ TIbGB^.N<>j ctw ~)O Cq<3֓!4lȍF;I-{Ƹ[ޒ-5f|lI)]!#zV BɆq'[kMlZkc;yvt.DWPѵ aEr]t5* ƛ?+fGv)a)N-թݒDc%㞵ÛT1LT*+OVݥ{ ܌#1˭bٶ;Xn}?4>O.k5X 4ӕvR z}ML}pPyD]@z)8xC#CV єck9`틗Ť*^)vnQ܊8n2O{~W_դA-/~5$zAs Hp%[qUfOǥ*]c ˢ }d#҅o}-f_a|g14>;7gX4ǮQ]j_7>\ѣ.8:Tϗd#YH)fru )݇5fwm~9YM} rۗ$g}d9ɜ }-jlZHmvnXbmy;xm`A8r @+iEJ BD =1>(o\WXHm땳 mQJ$lnC؁rߴ>@b"Gۀ֛j?F/4;-'KfyI5@ڪ9Ϫ{Fu'(NFNq% TOfVI`A[S d2J¢utk4bu l[9b$uDvdoy{:2]@uOǺa${{l^ La ?yH,-踓‚Bj8] yF/ljh\JڹT_SͥduO3}rU}ҧrt1mw1&uc60UAgDUBh(.7$_:0sûJTLyiQ!u)hǕQ0YƃNGn;%q)jA >;=n]k2YA.6ws.J MS-=&v(cS蠹gj*MJOGc8 9V15bjqiŚ9Z,ꁬTNq'ycf֘Rh]n4 ;;hAepMpQH;yjK R80I6"o VͅYF@rഖ *& :ОT Zx $9e&uPUh Lugn4"?]1i&J }mD*T}CGu(YE@@8! 
H) 9#h)8jJ)$ګyGр0aFtGӀBp-wI,ɒ)bf*/)t&]x͎ʏC2/UW!M}G!GCW)yBW)yUqNF(>4Njk&r kbVSQ Ha?|pF-Z݊:}YjAq{0z׏w$IkT7j/Us BZq:=㪳-\u&^RWZH}mzi΍wi4% (|(̨P\Q`F.YpP5}VԪ|5m_#S߹ O{'\7&OdW^PŒ}89NEZˋNT#dp_ RJR;| +z? zh"~g PaP}:0j$޵=n,*T\Z'f<)A/=PiZˆC!"!D67  NƋ̻1ydL͘QFT;QQ PB蹒LxKhTy@2r%i,rqmJ`/LҀ=R]4d]¹^nȌT@veCFl 1oy` 6d^VqA4$ې0%=`f,]J4!o EU6h@^gMD ى̉no ͿM0)- cE=ZYɵ1!H8b<"w}.D*"$ՙG|Cp1FƹGxAA`wِV #U%X gU APd: XX5 [(Dhd lRA@j_Vbd@,8P. ^0R2B=1c㘳 2 K漅S eR-XUiӁU,ר ֦6e/{5:p8HG8N௢]acHQGs3^ i ˋkjFw6E iN?n8z;:6"Vp2Xh7$ka@.Q>L`$|Hme+V5ʗ-7>:6MBfs!*XR%.Nv٩Opۯ[o ]9cךLP,v2ߋw{fh;ވt򭋫Dy}?Y6lK UR "zjmpŷ$#Qݬ< SBOnƯHR,ebx^|F?IMAaތ6{vʵv22MbZSz6JFR UHMN [N?bpڅjX˟Twi2)<8޷_C׍xAT0 ۇ0B8ݝRP?L+O?{= K^F]oTfrsCp *eƾ[1SRjzbշKx)˞ʹ#}BϥB/mTqť#O.?\N(]fNǚL0ͧr3,V|=(h'9~tt%aH ) ,,G$pGഷDpeeԺRW>6,06Xf²xxv1Isk?[PWÅDϵ6>zm$_D#-#ץ(L)) lY9 Jm lCgj|_R7qmj_ƒW ¯bc e~,A:n %:},%eW,v$P}fJv'P}6!qo&k{>D8d?%"RH!/5v&=fRΏMz>UKgqLUEJ59GOUe6^y'm+pܸRD()#yJxC31F,Мjkn>%m(;ͩQ`RA-2ri`9Кy1xH.FVOe@0Zͨj Vh~ϪQ^RlFBN.k*&ʅHREi aN6fV0]8{#'jG W2ԷRTA`LGBmL|,%VlTv{䎝$j-#h0JXZ+Z2q\cHD,sϝ76bZs l}.-ɯ NV~8܂*q]u2]!iQwt+3_&n5RJr.a~8^p5%FwݨxlLi #j.3~(!\uBr i bRZ: sQJK DYDӂk|(DOQq# x5cWpjF@g ٯ\~|SSTk %t_E&)dnّ@|͔Ɣ[噹g<%U*IfhMوxMWB^ ~b-Sngw=עc3Ђzg`@+"ͨ>-e &"ze@Ok3 ߏ=f6 ̍F~4ܯ\=pп=;=m;O}5sX̘q#S.] Q;W;WB[8}z|P9=24 gEYzͧWP֚WӼj 1[c*f);, k%n^:ׄқdCËex0 ; X0SM8 v|{/Ew_]_?|܌?_1㓿#|=?|ҥۖ&J-zR\Ns}F%*RKWx0 {pa0x…iX.B瓟1AxL+Bs8kz;vngKn;ǂ[Sh k.oǹGCWr6}L߈\_/7]ِf=ڊ Wш{4iSj߱TGӐ)1R)wgMCu+֧:7\f"kvw|ۇkpWk ~n̓t܉7c˪ھcLyY߆#uάpIf6䉲+lpET1皠G,R!if(ǾsCr*BX:XnM[[Nw4nÏI̻nАW6:ũ<\n.x[9uhMRA[h< *24U, _pA`_HNawy|{"&WaPkyIN,t)o;p:ZJ 0KVIfH;k$-e]N;?zzR\NƆHTc2KkwRRVqq捥 \5j¶. &C8mX-Н}ݓ~߹*! }lf7VS>x7*vnlL 7st7T W_.]n Fx;;\mIB3b 2=SZ2$Za=I$B<;]MeSzdiXu} iIH^yvXY`:Ơoڻ!Z\SWNHv$S%$Os™cΧA3))]>pK %`Ͳ8^1F4Zv <_%1f/tK$ {IҌٖ\ L1mVCS6**I$h9KvE,α# \I-ɭFXUŚWy*IO]?hZJ kz 6;}@@0? B|6Ͼ{6@_GgΌ|Y 9]gs%u0'-ion5>P{740 A=H|9u6D.{kcJ?Zj:"r45K,^,8ʭ<}s" mlŒVN±}UHpb5c!3m=BX1[tP\%%\xFSݝg]FHƬ(L3F3"m wn>OKytyX+9yn7aŃ\q ZibU K GWHa!)jHO+(U77gNw_̳ }A ׯLm3+F/>�gf0`Xhlݔ=AGY6g7jFJ肇6܋quI W xD SJxk.c\5Ӣ6zbt+UMghz)gw. 
mze>N/[jǕb,o)bX|1KwN<RР{3{;۩?A?M/"N/<_.b#rpf= Mͥ9x+:'2&cˤ3nKC3<2/Je6t6Xs1Ķ~rZ^mI^MRݠNӭU"K-Z_,}0@W36ycHo 鐃cT;K0rq(A:[`!K!Iz5vaAq F&fx=WeA-^je8 ^\f`r,cB0^p}?tbd]@s4V}Mk{ /}Bx #]䨼!~4MK\ 9{w◾[g*WԧmV'=|NFWIbT0HMXA?9ZiQAKß ŇUڽڨ\khrˍxuFG:U"U2߯5E6Ӣ:ֶk+rEere( z515"\3* #CBzKB.V9\ , zЯ /鍝߷Ȳq^mƽw*Ͷ~z߮FC-pûG߰$sr׻χn[ ~CW7Cddeƺ oA˵N'ןv_˜`×Wo"1D`j 7oӱ1gk oY+kxSj"J2)~]@0TdʧzεfSQBvJM*szKA' yK V&g b!=Ģ :dB ּoV>ǍLHw\YtQJXq\-(,& NK"N\8#D"N]7H~Ag Izw*$81g?iusYgJ:&N֧z0|)^}Vf =jo{+>lA y\>Ir#Fثز:-֠uеOv=ʧw S~CW8WOKJuJ#&mS60H/ gm4qϗ$R ѕ_'nQS 棝Z%"];Z\]XNe4+P' p=M~ȎWO3.\6l|N5N~{SQtTmt)N .2iRKɟH$tˑxGP%7 fBy$*scswjˍ:Xͭ,ϩ@: :CQE&|ˎ=.}/4Y,_t"%1OWɕof#`P\B3NJ9<~`X$BVbA1U) 'gDTvF[.Ԡ'#!##3w+t_lk|JDKcyP1_8oZg5ݬ>fյ8`Mlл439St!ǵo<|_l<Ĉp0o]21X}-Kk ]r-+*-T!0bJVQqQ-W3ޢj=Gz* D<5B£d5'֌5i&kVM7D[S -)0n'Ȓzsw! JL@>8WXwzFS`=5:tfA'rl1b aD_e.]`RB aNS(&Ajh@0m0ZD*) D%m%sAڢh

)2 ͌ V:')[m~;dnh$f7ΌJB/4UzL c7^'/HJԒԌǁjRRl쮦$31uz5~єț bUm\Jj<̖ܫcA*EmYw!i yNtU $lt.HS0[]z l6KUz`:0ϨU[NgU!G]Rkl-T"K](bXGؼx LA>Ȏ K>\쬲J_5TI*q{Ҏ$t.{Gryjɨ0.ٶsU:QPe֚,SXԦEQ dg_p2ٖ/f awZ!]d'eyܮSXZHۖWvGb5l"(Zu|4 QZl>:=(֔u|^qQ3'ИsGE3W6=  ;N {  :&MX̮ݼ)hLaݴ s::,C33%ثS2]&Sz`w2FCS\ZpmB.S'egt4``E9 ߣt;s+kt Dc7dtK+5 ;̭vYKRKJ2TɹL:O|"]R 0vt}])D3*sE*bwzOY><*P h,71,Fbz ަȑV3\~V%DSac 3d{3k5&qkY c$ՓS:O+qNQJD؏ny^cĻj:_4?kCB9XRd8yXgY+Vl}&#WKG#f~x^#ϕͧLu,Nq/xA?}i%лsjRɜ[u)WD *M[ۍo,H[`2SI&1ezhX:es{(`?BŞdk NEfW3GɒAx]0ta@|c~ pM;FM"-_^4w2ėRI 6>'А[ >ږW\ޘ4~q)J8=5+Ӌ>}3>żcO >5#~r负x0;KcD\4 ՞bzEwfr^)/1pu.P/N>g4ߤZ ϼ> r93DL Sj\:-|:TIĥa>DR(GWSn舝& .͔2r+e`0JZxqddʊc<](R0v)($Հ'Vu#$6QHxvv? g-ix'HS+ʞ?S",U,%5M~j1`gdY_=G!*8`;GoI"\G Nv>8ɈFי 9Yv衎vYBfVSăO}k6`8% S.l;E H/O^ ?ٟ}~­YLLK!BKS݂M1xz-)E)w(LAcj!%0feZ8C4I͹0!^܌jօ-a2H/!xUO#PrE=3n̜Qvk(BkWn.i(+[Ш8h\x>?! Z$APw.ce׻f/`J54h"',rJ-Nz Q*?j~x~70׷~bQׁލ⥫Cl~ߏB5H՘U zq5?\A=4~ ^xl~81xO ^e`]f\bQV* 9E HAěUBcqAb}uHu讃,Є3E ֤͔5'-tO \;sl{z^nQ/^'d ӥgd**õ$XX2&^ׁ:^d"_²S6rIw8vq+wi$טCr_h@F9|s֑Li/dW ?$zuSR( f:FƘ4_K빖Jz BRQ솠u!9HW1`$f2.XcK}P Dfƕm)?p/~Lc3Jױ־~+A{cgл4޹w.V6.AcAtcp.V%u))R͜k} KqX`2BSAմt Y?+*0R@-Jl;6H*%4! ߜ?/@!f ,*#gn1Fx[N+0ΙRY -E+EPXx80`!\CDEgLZ $ZlMDP0`< "2m:lRq')Rf>P%A Xs $o3^ro )ݮ)!cx /cr `T5iR0Bf †R'tLxs,firT2WSHbPm39p _,&C ^K,Ds INPJ.Tl *1G062ADBU620&XH1@H[Mτ@x)T#͂>Q̂ƒ1uZ d7 1oE8H!5բհmz:zMCaDɀc D}X(1 +u)J%BKoV x͝Rwc?@:x_0Boa xi1*r4IIa21Et+0k5h!m(1M1N9)MФUuiWLW_VGT8oFg:y$q`*U3N^1wlT?y(ro 'Y !6`}N.3zc@\B修+'YZ`Fq sjqAPGV"Vr. <{<\rb` y ɯ ]ILoGS_ ZƼ?U!MoFSآyqgg>~(4r͗.&kȋ7I ^5W4r^~~)-Ř"L"CZXQG%>:rqA0~|rܨ@B33L/ fkCV$`K)a~%~rYߓEpRwN i<kkAutd{wcU  7w{K*a9g JG>ԻuV 2xE/".bYͭd'vscQ~<}2cߢ^O{R7O.eT+D|!Ds]}؁v])Eq뤇2U o(bzƻSs4~PD!ޘ(4c#I'Jbl \ڼz]s#7WT~ڒ_IWeeݫ})~f+Mddfekd2Zl@<\\i\|pBj];WXz3T |ɋ'/A3LP*8'3GEIt!qFaI>RsXwːRR/}1҆W?ĵ*Ǥ! 
07\PNrSQrc;)aw#l16Y@ɱNEAʽ]] d_UA[xԌzJk{/Ý{`m wUUzq?J1I;+|>.\&8؟n/M?o+u]?Cp,lxXo Ϊ[UfqXH.z/3@Lf[ܘL=魤Åe'8STYŧdWH w>@%`jJ{|խBιxx=ѷ*x䮀=r@o!W꣡!;lKvwx\JA ]lu\xcqKK[ܸ<INqG%9* F=`Ɏh_#&!iǤ}fN/HW .IXkGD{C,֙ĶC_?\Cm)N1u#n.N::a{Im*j$@ u0nF)ּæ8˾8w+}_/}y.mZx7~'ϫOLцm37zgx")Ӷ'FLߜERvw~N,ɫ|,a90+O&vjO{8Iy Rp2Z"% UmV %VumKli5MDxJmF1cdXFZ%hrCyAq5YFdYc}9,薎8*Jv8SA,9?Vn?^_\؇ۥǪZ[.§Ͷ]I%n̕A%t.zkxׂQM}+vyђjYh>߾Y`V2ׁU}hA>Rnńl` a1VQ%}r+ʨ8kџ 0Ṁw 6cNrhTp?~<쇕'Uhz(s! ,,a_Pv:C*`'&*?U)(R*MYDPF%nrȲZLyn&!׵n>ǫXzl(D![ꪥ_Znsha+$SJqa[S!NAdkl6|6&[+5#>ϕq;^ᾮu9ڵWph|S4 4z* '^Owcŏ'wj޴;ۅWڼmcA;Cf r? fڣrί\L7MJvaeF|Z Z魃G=Ơ1~P?~7d};E g1QF9QC6`oƺ xZ:}ݰ|[JnQ[bطO|@]0 4,vSbJy,UߧɎl_t䶌}H!~xR{ѸmwJoW?j4+i^ӪyIn$M.dוtsq6/J |̌ 0Yr9נXgYh_'gN'YPcͺe2ۼ1IW F# .oIp!Ome+}0=XaGx6b֑(d亱8֡;`A̕%{:)1e=#; |]AJvΡ%c%5_"V Q|+1 Z.*b򔴯'5OGN(>.}Bo)]O\ɂ7$_m!mX:昂X 9t#KhWF1FqNCacju+Ew']cG]_an'bD|_5O*zgGU -6& ؍Ў) J͌I=VC؈ݰʑR&~wRaN>%sI8N8i"% QC[fKBjof1"0hc5PzbTR`lxJKDhwWbm```a1?fwA\Z1b>17eKf3|d};I%+f!b&cOq\'4h MkV5U pVY&#M̄\`1Ȝ0-j#iY4JA-RD̂fdAq@*{$*1kBdPZh0Xv=ehFZы:f5!̊z'DK#$0S: *eAc1)YJ4)CxČ,\u`W%lFW8SO0Q5;x5_-6D n.d ,Ifd8Ce(Ũl;]m"m8krk\g{M#':2dD͂.*#1R(rk"1YJYXq88ZD7Hfb{[:gʳ!k`|3{X fYi&bxșbZOƍ͒P6*rry] f¦3LG7JBqxZ ˆdoo%z0g[ ӧ|0vjgkgSx1fbKk,]eL pXѺg,y%>DBg > Zģ3 d `|,>]rz6GUOsΡZׅύk.8RQM.\B*q(Z5yuSKran]HMrӍ'WMO'=mU7o9\I_Ҝ/\)f>|I|ON'̝Ozi2Mwu-*OUAh |XӌAǓg^iN6@OBD0lDBzu:6 pdTfiBTʢpZAYz|$YK [Qb>"H>)!يkYǘ7&엳7W1}d} F@ڟco~]y,Ask6@v yl"+NiFa-wJCIؒwg"ͯ4dӌ7f?v>fKңhK$ޠf[D&i@@Ynuў14QD.HlF,2ع3)D!H)9I'N d ݦՕm b2ôVYKeEPsYh@( D 3cc Hɜ'dBLo6eTffVmG w"*aF<1x@h{N& =Xe"fD469í!b&Le,%!b[VV 4X{_9Y'f-kp0;.b9ZHxbT'!46>@>[2;4\C3(s?h393t5" CH;L1<@̌Mk)$#9JɑDc@r%yQMm&ʊg!af+1+X7'-ڈb)i"e043\" RY%!1E R=XL^ ˺Qj ۂ0Hwq|(Ԃ)둣zfZڝ>7(i'.сgzH_!d~1ݱ763eBZMQ2Iil{IMMCx0)NsRDv8֠NvD0SJ'< E1Z7 5kw<U[bIpKAI oѮb sghg3izz ,td6du[رˋk1ԕŧ{. 
D*6WsLH̊[fBmhpѻWyVSSA'z?cz&p)-0`ZJ%âU~%̦0ۧfbXPzjvƨFzJVp _P2vL,84,XMɣ,xZ,&.՘pӔ!%sZo a{?mCV:*ꀫz-L; )@eK#\)y?Җ ۆ(mH[ 6@oRNU.}T81a!>aՑ7oCՐdEӋp3(Y6dd0DVa.`'B{#$V tBZ /lY6(+p{qV[vMyY[NIKLs`rsNQ0ʗRr20C^"iވZmFmMz<#_Fz7q!v.eqwDi/\í$t./y]EkҞB¹zJKkfa$90ʲdFmGD(5ZE+CO, qLU0K%c)>$hxofYǂ9 )UvU;q0˜c36 R{D,t}-1^^E2ze.uT#t;( lpPZ"V{kqApɹ%HSⒽ>/侼@7Ԑ@/9;1!bW#g;75TD@"Ti ^v;#>.dY_I~Aleb?Gs mUU&v)]r6ُ+h_MZ0M:{+>ձqYHO.-P.ro$!qm/S1MRcwz>?f9F]3}sd&W~?WJ1#iFrHҗ0y2^Ǿe ף;;ri=|3jP޻ZyW|X1ۯE ^&{;뗋_2:`ɘ}-FnOo?^’ܜyN‰EXoIbTæ;q0:6}~_Ѵi3՗Τu|$ѷNK0IΎy5"s~^}` R $SkRIb Hi>?#)0b&z5)KY51HyX-{9@>c47B$H#Y˚uwWp033owid#Vz䴳=fQ2MY[]~û 檙Fד0\[l:*A~9tzItJEd>~<{Ʊ׀"wGEQ#rl|pMÒ" R0Ntq%"^yYyĄH;Y`AE!KJҫ݂V :2GaJ$A}˧CꮺOp82UYbǩ8cv?{Ïa5|9߭Ki$]FܤI#˩Zm 6ા PMÜ+ q!6y0ВS1 1^r\zܗoIGb!RNs%~ksLjLWɸAQg xjИaE^ qXB!Vgt$>aW?rf+e /jn |dƏ O?\&MoJ+A4>JTh[aoT36'bĖ%RJkr D)Ֆ(IX,\^^,:mgu|Ζm>D>!Aq ͩ#Lz"lG1'Q66TtDz,FmF,"j y,[=W :m@ڡsOg$TGNz,gLt^wk0M23.\m}sŔW`ĨmUXZq? ş%&r~VJx0wwoT. ǿ%'Ϣu뿼ar=f]EުKӸ__PX4뺢ҔY28?H  yaӺO^.0D>Ă%e<{L]/FѼ./DN|x[Dp$kYGk^E{{3H!YҟB>9"N|Oq&kAzQOvTfL!6:'o.wd#ŲV< t%_' -$Q``v'L VŞw<3k4ƣR WaͮB(v"Q}#K=ɾ=cY")9Th" QOɂ0!GJL&6h-3Lf>#u{,q#C1fo̲㠈qǔ`b &00ь3([F2Ji^B:IWQT}wcR;(At{_Eh0qnQ7SBnz9FH|JUjwƆLةh Qg~qc?n𝭹뙠j;X;rخt6^A!PK9 idXm9 s0Lc,8mg;($iTD*{(+:B~[.[iy{ 4\:NQ:7LI*1%0ſ"][M-7hEm`F0l;GBg Hh< >d\JR\ӐkW1 螮M a%qPR@;RZKLHR:e5e+n1Ѷ uZ0|3̍&/a=6$H |L ?Hv'Z+0R"R-]iڪ1*ALșN3ay@`2V-_ǿ>|5%C֝9]3ۑAF>Z'$K创HÍO18,+[r4/_gr1\Torג (֬>)4%o>|YBu,sqYLx/7:K!oC\{`CN7}JLNvO4n>rQ.y0"%邎Gfc3q!%C`%+{EBמ6M O3N1d.Dž!4;yKKIfY$`Ak؜ ʸ9Hzˌ'KA &~;}}2í L| 2.G0%qK[22'CAqBMhz;E3.+><|dmVkC\{ѱc*y?}6Hq[hRN7V2^̾Z#o;e&8A&=[c]γ/)uy/?f6F^qnt,q$`B1UP\Xc%BֳVp K_@'z6OfX!)Ǹ`: ʂ.ꬆoۧ:xOǟSD'T#6вTrd!+x[8kGp7X>T:^m^z40Ԓ[<,Şwn灭a'Jf`2fegEɁ8^(Bp,`x̍)?p0u3wbm(-d)rl刲pxJJI Şw n#!z[D-9'@>&98 H +[kGIjrsI GW!Lg 0Z<7jipi֌NĘHI&J/>/2"SdsԜ][Qמ8N8ҜD){=!&"Fu'M=A x1g+s+'yu,<&M&MDz,4K ztX9awvah pj;о.iDZӈ@=y(L)K۞nj}V_\j{6c?|p˟ N~VC%N dVGGUީ8{G(^k!ݭ#cR~\T}4A5f>EYތq.2fG'ת@ q< 8/20NcgMgs#xmlLkd5fōc5t-]er[kb?1?)J!rpS1Ǜd7/v2h*;h:]d|BHJ&zDu# dV(An:D1F0B\ϟTATiyBI%?{[*kL-Dm6ĵa\9 }Z 
gOW.F8j\ڟbCTu,d1IvԇQo{޶e9!^\7v[Nk X.YR$َc俟YJHQK%{jNU.hEL6ƀ|{ 6ޜ.1ye`DHБBb/ZBavc/_ r  ,QZc.>'E y F $39e6Ɯ Db), z(Ny,'WX(R_쥠֜n.]X|2i B28_ "_~`*@|˃V2ol]Nd+H|LVγ1 iDҜ[ 41^B،ܩ'U%Pu3ZN*Uࢆeu8ܛT!J7Raa;Nf= 8_e̦"Tϋ|QhBh;l휎!Ia/|'u$2HUL烃j +n'O7(&2ٻ/u[.ș$Ih*;PgcD-EN^o=X5$.㤥( Ja}G )%9iz?c6ť@9~kyzY^Ėp9sE2F4PP,#m ˬhj*#c̑[Q)̡lzmx)ʺ|TINJn<=tnD4łjQʴlPDBH5N&zdkB^wCq),Z=b(E ` X+"h!F%bN1(^N̳XYf֨&[U"IƘ:gA)I(wT WFM*'3X%՘,L(Gӡ,FZ8gc-U[dE\.G,[#J1b%rgE1X2Q(W_2 19" hcr'}y^(H^SJh05g&S_hmc('DUAPNY}%L`9f"mgTi\據Q3КY%D40y_6GS\#s( ZȴYw!87,Ih0cJa 56jY)Zi%N!L#1nmc"CTIĠ1/_E_&蔧 X8n4vQ1QlAp!TP(9F8*iɣ ,|\Nj"&X?xǧhT$YO,.(WЧÜsOOa{u"OTß#)}z$K \>=WlާIn޴Բ]1[x}!j.̹?j*\I5FZ@iqC1;tÈƧ& nF' -,NX7FFvAȹLW7fٰfYA}Xl,v-v IJnlmIEMlD@&_ӄ )\(_XSCT_AY!0'O`0{YT*Yel Ec@慍RSvFGL H JC$M{I"y<ݏX(hB*!uuߗoy(X٢`g8ND0\\a8`„Ұk+0CW Vg}sE4\DcP68 N;ʜ+IDKKƣ؞$"-j4\z5My-ph;3~2>s񽫈pI,"𮺎,*"2=8}U#qpD&YxK<X"0mȻP`DN_ ʰS%p>ry{d;9p 8vN 2Um%V4?H"EZYV`5n[wl\q5/u{h\P 8r̐!L#-@Z%sEw^|?SɊ ^8s8.3@r )9W匌@,JyLtjycYgcѼY5G)08չVrhiN '))ʼnzG3[g=S󋍎x9qd862M}4Ƙ,z+gN=LqM*9ah8 5vB@3hq5LsH]e>.5p`y@j"F pylPl:Q!TبRap}R5C;+/34X|A:Рȣ4`LJ?#\a[ :(A"yBoE ۣd0O?sK]uPts ualSQ-`WG]_&*GyAy?aiH e {KRCj2Xo( R%n@ cXw_26 l['mr:~Po~>ug#3GUD+ԑH1QhzI&|;"HjcbFI-%)AfI )p0KYMZo=4ݽtI+fR]o*JmU:ٸ2Pfrh_@c&'.7Ewr&_~Lkzk}]UyA#\7\XTa ,uq)!(Hۧ?Kd,'\JBqdQ*Ӗk>+@A09i(Q6*PAajq< k5i"PP9J͜!:D;0dDBbV$#6ޱe6(.ky^H+ŨxLV+-v8y$Ӽݙ8goye꟯6v6:*N4GX)Bpb»\7 we&~I"NJk&*1 (DQG gvx14Fq"-KbY]JHfv7.RibD ueILM%sKTQj,s2E: RXYذᏄ卥D w;("T؈b+ ؇[WdyULIb8lbǬU$mJJ%Ƅ ̈E+XibT\mRXt]&T]"wD%yHwZފN +ҹ+=,ly@A(ѠߵOθ`Lm}VCiHJXkU\pT*WʏR Kf@G=%A$TT QXk\9V԰ᱎ<'U(ŸAPd pQ˔Ϭ@[e; ,pDywK1t(<(E$쟘~OG?o܄=Vb)|<3?Q'pqL(P"qR K-Nbm cFˆIy h!Kk/i%0/fz;GQY9_E'&'p4]$]ۍo߅޵F@:hC^K{W߿j'[̓?7w޼nezgCïio7vw^oϞ!9Λ_o}v߼`Ljzcs`G/~K{MdܙwϺюv OӺ EK]%hm9=5/Sw:]fwR$Ҙ_}3wY3~d |{0D(F0/{{|•i*e!*Zx׃`0ɛ~3vvt^ 6~qWwW'gqg2eٯ۟Az9Rw"=OVgٟ3v2\y1{ýܤla`LIsF_+Kxi1\32+y-[×{$Ym88\_iQw̿eX8<p2%p=K}~@:vWn#xe˶p*Wm:{i{.{twbr[OO^ooNz[ލ'k{K34qzvlmdJKۻ٧7Li.7E|{=eә+i| nzTӔfm48f6g6gK7G׾&eg=o?Op:d\?;i4l5EpP7lqnF\+4~!w_erv_,kwƺDͣDk,zKvZC{-iαтM C|#M gF.@ɀ"1>i!|Ƀu?J̴uw߳ېq6V\տϝ ^>JIsierW-SP 唖,zQ4 
EYVU#';i`U pYM{0u[Ĭ !t3e%Nw դfGAo{oOԚYʹ"Vƒ4)qZ1Mi,qa:Oͬ:P̪/ǥ"48TDHF+[´ B+[(#;eqIp:G8w(MoAL-Qlq!Z"PlέJ*0̘Jrc01qIgoSM3S#jߺxd`˽֝Nf1~9fo{W{/Ύ/M׮%.<{5ɾiͽhMc縝yއQCc]9qJo*+3]ίM.|i|Y1EhAmdvMT |fj5h)Z$m5h&ZhFk@>(wq9H /31ُ i{Nu˒%Gj{zX[])9l "|R0QMyMSq{|Ÿ*5xb?{R {^v CnEH},AVΧ&$2h H^jKzײ]7iPB۸ZqVYS*gWjUZe*Uvh{{eVؘ:>:K3|qs #\%k?9C| L~0VMg52Gn{m`@YNe-|ӏ[Kh\lٺ0 t1`1RѤp pW$@w|?.mpvմ12 .TCU5"Is6H|շ&0JԃE-%j}cXldZ:_a:,^50 MFc/faf +:m0$]PrΣrk54#)d.eKJ9t`ѻĆJ?wVǜ^B2f~!cb˽T!]<^i^S! ~ˇA x>ڡk{F$d,)~u\\ui\uu9{f {Ńȩnpe*FȐU Of3iAضP>M+cCVMq5؈4~v/ vyg}6üQxcF ThkF1lvkau?T6ƃDbmnĥi(r7^MAz !"LI*$[l2o윅m]-r5~_,υJ%C}Byۡ3?ZqD[7 `n"Xca:jMAcН3)b aYtfᡌaY_oGjȅjRO5+8+]d=q썚.HثkKV̗f@og Zb_;0]޴}#@4Z̞\'#υf ]tZPBlŗ!Woq)v0dyO¬mĵf$ccq/̅[H۔YɶoZ5umh d[4YPht]Nd"jU3UB*v7Ǹ[|9H.`W;o省VDiPU85U1SE{>5F/ԊF+DOW܋߿l9!p׍_LcqxY|/5M[x9̭*N<:=d2[,L eP-8Ɗ2pٺ(>Aam 1u{1 H2GX#3=9؀Ka@5KY-G_)N` ·ȁ>qkآw~sקvQat[:֋bE<>VGukUٮycdq5}bM#F/[O%g  C86 &·EYK 3!{ ̲&*V&@;_Arњ'D7"a[HoO8>Dnd*Ds-t5Qx)MYåb]/xjy ;GQ^s閎5@/9@}i;@!r3s̤ #]ʤy `AY1<{omi=wyioN;_l?tQժN{ ̤mȝךM.tɫ[b-*:80KqDX6 RVɱ ژ#ꃩXЧVasX#E?7oy\ "vB(?Zer"ᩕr|[G$8Z]Gԃj-!}l#&ԍo떆L%F:Zu^ksGEȸ0v_-FˀRӫ9mR)9k)Z⬥8k)l흜h~C}_Jq<r>y,y*qqQW?k5x{w4_s :;w̎$"G| h|ݻw?{_53;a/ |'0Db4/_LJ?ߣ{kعL$y\>.#Ҏ?量oiPgKMeC1P!JoщvC%v7= W{& b~;;ggn﾿x0EO/!yg07Dޏڸ/5 0s#; hvm; ޤwo_qo礭Zg=vF dݹLgum:Va؂;T{{<7և*8(0t)_k 7PXb?ᏍhB\͊Ms,sYkƖ0T%=1Cl46.+א[X[sX3[^z<|4{,a(wnf82KOGc/ ead [%`&9j>:9z;z ƘR0ZÚep䍑Œ,7Kah0G;?d/-$mʮ(M2k'8 [L(ek()͛v VѰ!e@m/wz%hq7diW⽍/0׳1|vK&Vl F1c0&Lplh:ÑcH&tdtO5.fLБcLdұ!I.F&cQ`hC^">m&Qh$\VHX8"7^o =CK!IwC={@R6@rGZ dr\>L4'!eTwx 0I{9,v_y<6q 0 #ٷBv7u1./>T; ;ȆKrCe!K>!g]%CrD;DGܖo#,$UUN_½LbqR=}6!-3*Mʃri1zݕ@1`Wbi`so-5HIkcCN70tƽ 1A 3W;/y]Uw01?NYŔWru?4#}I-z[34rsB)ۺǠN&^z,Ť.Tsj(b-[!tm?,0c7c+fWuW xg<pEmHJvOjwv9ΐz$ [3]Uӳ>SD>M 9`3-J5lndvM!h D*HK 0 $[T,zPJB{P 5DY.N//OarىɮO sQrlFǒJ"^1HM%TP)TX/@XxIHg&-%0{Q΁g!%T9\M|n'*IMJ FPXHB"sX]Dx1YUN)$Z.˱ȴ|Ůy~dZ6r@Iߙƣ~@v !Iθv "Dž[v"T%cN>v s#.Ż#9/H;ɲ=G2\-PB{IKu8c~q1+em p7ΰvޮZ+&og aI\t!i˼p˜~gB 닒'  -iy@ңGݨd蝌Gd@?Ye݇&}b$y B1*u[A5M@QAhH8d#7B >2vP NR`v/ֆpt| }8ў+|~&Tihš M5Z'eWA.&.V Y 'JEbA $\o)-9^.od 
E2vbAj.g`ۼxGrНU@|Tqd(CBo3Avw-ְfʊI]xLJY4[wԎ^P2aҺEFx8=#*1ܡ\ J2xkR A;"% R򎻝Aiz&VC,@kF@ִ]BMMMB+kz@^ctBXmG@_Kܴ>dC"kUGI@3DRM"BiGEZb2.,E`me(9d!ʎғnqjK+CH|YܦuBZ )i;D熯@ XwUdQf9^ma]rtJZw.[%7ˡG|u.7mo"lͷ~~ϯNK;߂='W|c\Y5l *n?,dDdѦyԆ?:=|3\}S}COp{'{Yߔv$ ~(?? %g.(~C}>=kW .NyܝsXˏyurӪfJA89/\'Bn_AjĠ9O7FDH1uM*hz BVv]aqѿɧwWg|}^'.fެ_\un&'S*F'+a-Mi~mkqv:;M&~$BYkkkM ztyz/jRa#"]Fy -FȚT)&'fko |;v>uO Y\R0Iy1}۲vi w};NͪF____oMħBdۊ yh E+% (Yj//x™ǻɝ7\tD(hb"_@[3R%"jփw=Qy4- o/Ն#!kx5A0/oփ5zwde-"8͑rW]!\OJJ,=M:۩GRw'N۹XMa,#/lW͉rN Fc/Ii-wLv׬׬׬כJ{3SeFd֑Z5d_'(J{XH?XiEAM ¦| 踒JY؃*cV% qd[u v{pJ7joߪȎX M&n9g]]"XId2Ve2fZrTcʘZ&]#˙Kk Tπq@M!}t%fT$-Icc^6@qAWyNLP/PW<B3|׷Z#M^!CFXBi1 Vơ&瓼;Rh5&OMH[#Ӝb(<ɪyHx 'q{؇ך: x,G>yPbID#/GtJ?)07VhX7G%A$Lq#-{$ݧf 6 )>NԍT5kFkєly<}AR)R^!JHj(6NWCqyގzm<Zm&Q̊QV[]v ;J0շ}%]5[ HKJ,,fҥNoտʦs"g:gC@IBFgdC=r=YPRqb I\X&g2žա pt\81+bb;V[ܒ&lDZ]_[rJr3Pc@U)HJVilJ0_mr7Ki%gswmԽ@Rc*(BL( [,_9Fgm\OY€}:3Wn2鎮285ݮhUK%8]Ivyyy7wf:$U@ *G钮UA&WMBpIvM}Ѥ_S_>ŀ}jeAWY5h-+YM::+ pb}SUZ[r[=Qz>̛ + Ӻz2p1kVרϟ>)kn-3n}=ӭ8YN?k dOc".ߓJTZi[m?|y4̐0ǻ'͞w|BOfd^|5^:  lRQJó Jvbr^Kp}8|g[CyN_,l_iVN|`Nvb}J6ןgFR.^: 酟|FA1sHS ZlH>76Xy8hyyj#"㝎\̞w|K栗{g-t>EK9dR~@aJޣɍω7~E~b%zrmaHC:ykeS:ph?m-O(n'tk>[ zn3(.$A.b֢欲UPK0: ZzM}Ѥ/ c@?eٻm,W~6sxk`H,Y wyD3 " @9E4ZR݇:;{˭<$kD_g.1!J0tKX㽍rĆ1\q9 oZP*xzt`t^KC bgˎ$=~JD5uy|m?zJ51#Q)c IyJ)NRpb7UMg9OZY͞"|a^$OĞ_X&DlL) zc =v^2%R䀤/!򡍆dW^ y{dh8=Oj15E|xFְ֡ƀUS6Fcd`)1?WV|ٶ@NZ{n6zC5qh`hǠj?nQ"F06gղ1GߣIbyNë<#-=5}De165yyΧ_W{ǫyB4|_,Y$l>[quZ뫟>.|cKET߲T7-otU`\U2{,oeut) $IZZ IP]B%(0KS1RD!MOC͠3J  AceEeö-Vv!U{Nhw]CZ81(8+sP5 U\#v4&x^v aRjjT/3~OZk)[\!TkB ^aRt 6UO]݋9ˮvUra 92&%&r(eRKn/3D-լT%'HeJL(sSq鸎Dj><B CW8qqv"T Y}Rlԉ#щanD锃uSmw!HItp1>fSR'SAnKy}1Qz4w\&BDn4MƫFظ'B^2ARFY#jYl/R]^i"˘ϒR2) UʢT"GkIiXv- 1^Zx{qeT4_N j9sI*c ?\PI8ʊZdn䁺.Q< qVEzo\g}dxэl9[WdH{'YY:pSpd I=4LpJAŐ!Y~,U5\ NqUƉ H&<+~G]{P).tkUiG q1)R !k{zET!%U)c:Xp"*n&͟߉t,;pgf^G';"ԼA7Ω || F0plFg=!)Z8 3Ѫ?1%w94MJyJNSY=KKef@<4)ʜYYy%fNdK`">5mGLtό"C :zh2~Q`F@#.)`}^ٞ^V@m㸺I@7{w K2-^)E\qe7NsYHlKbMe\2ɈՁ)+(ktk8CC_Bs1$ N!:N|قr%hMszى4hrՌKth*h9lQ ДY?#V$*!E^Ȅ&i-ɒמu@7r9:ȁU!R6ыhƖ7wa*"AC|ި1Um: 
KRB1V7]Z\bBqV:]^Ҳ!"R!p\ ؗ fiVbL sh,=XeΊZ;}lo{rexӶ h&TY V@"˓ܔ,MYa $Q*9oEkk[ꍥΚ}Yh9o(>i+^+tdAQSo/;FQ}1$5C>dM|wun\de{#Yr,7v#ctX+%,PFF/YVVDэm*_N[S(b5KSi ʂ#iJJDXQWM;FG1kZn wIzWcq~}O~u'*Wddgo?w>~H,02ɉ1B?c*:|EaE$P[l[FeT"^Zj*+hij}(EFyit;IG5<]=suar&$|( rmTaOXny>}^(CnL&:J ;g+!2n.AZ#0F'%%ixǫR8`==H" M,T ao@Ys\U(*Qg}'V#= g%}Xڟ{muqOK%s@ުyKzGͿ*}c_H=H^eD-[=Y/hŒVź߁|o7ԙaf9_JM΀iD4%Bf H(h5z?a1r33Qm/{YgcnCN[:J/pl+ԟp"@]Rixԩ2 < tH)F-NPCڣJw?Xun h6Tڨ;`{1CG82!EQ;!/z+ݙH}vؕ@GO4]"/8c9ȇre ԩ_2]VOU{H%ӅCr-d5{,1f_Eid80͹s>m'Y/ٕ`gZק1ݵd%SrΧza}Yq&Ȑ0֘މa'F3dgݽ]gRjyW"I\0,,MWK5; ƄQswgDۦ7#7sbPblȁ2B$LT.ePVgjt4}coٟA|'1qb.=p6LzB]A GM3%F zDC vS,ƸOL11*Nq,UP#b:_tKh BB.@"Z $ \ţUZ Zcj-qk_W{ǫy;kfO?o?ﯲdljU~<[,fW}}}#dzշlG_=?CqWrUIgcқ̛~un@JeAȤP1hdR!iƩAe@rL-$,g9?I~jFTuwJ0}Oq/dleu%)EKr_"҉0{q+{[[5<<޿Q%PtV<}m4rT. U]7;zIo@Pl^Vx}'x.l;2 ԑ'E&xƐԤ1cNs#mwAuxArw9ڸf7Z[zu]h|d!_]ɀZa  !€Xw$r05c2psBք-cT=z2ÐcpPXDIuRHRΓyf^6 Bj+'! 'Sk&9ctf`1fgG~1 P1@U" ca}GDqc;K;WJwgR(wGr߷:a=I %8,OJ-9Oxi Lh!, Ny!SRB- p^, cLxtO+ȄJ 0&pxyvQ1r{"#[}ȼ\E=8h*̧lp!B%ni_i.3&NrARJ$ϬT; %EϒOE\ZVJZ]lx"] dYps&$ L򄘲LI1ed-.o0|;ˇ=&K2h VliRHAraM,0SɨZ ࿭.v!þHPjp*NӆzvR·%a>NxzTw #&y#a;@0sz>=DE;&G &oI5DL;& 032'6\N|"ɑ j7 '3S5UYǛ&AkuV홄N+k'ڔIj%(k5,h=E™:Ѝ- -CZF0F4g<~&EsqV:|!4J!xW83|š~q8;x<5ZttìwV&̚qi!}C% }zo1mJg aAyo1o/ @ʼ"1ziAa<.%*ІD*ka QDLXu婭Mp*V>0qxhxtr?tma\D:':?'#Mnpk8JW99(AB w87F>|6b^($ ׆~u&*lgBk<\n`(ǘv%s{(0A\{=25 k|6 /@״-/]"Ȧs{2;! kI]aҸ;q)m'Z[$hy S;T};\NhZ+0 b_+3_TqÈA11Ad0E P 1Nh%MaNmud:2$3,x&KBhY.u:vBsA\3[R9ŕO'򈳖O4sڸ'VrpNg)y.7q8 !Ux0h&ZF;w ~`{ׄbJUwgereg!SL 0"v?O.żSsp7~LoY^ʹf 4UMޜiݸj˹JQ"xX6HkQm1nZJu^p Iݖa:^h1*6D"a{a f˻d#D~|k .#/V1˅2wqV]%[kwi٥n&o?$Ht7<枀ET4'تvr| D# \t2+Mc4¸x]]H ҍ@+Ӕ-CQ:W3%vͰBZ̆]cEWK4՛%_q kNJ`U;maJq3Rӌ@EqaaGMc3StΤz$MmVS; [1 h|Kv! ( 89ݾRmOn&R_h6Ǣ+`߇QeڬM|BwClpaY+(!ܾi=b*]d&=BD+(HhNjJIx,vaJ]2`&R ?4l C܅*ne5 5Fw%4^;N6sQ !̻Ueh[GUw϶b;A5 )Q 2QD6,t#WP7L.|y"Wh=YSBŦ#m7X%n)q\!lD&]v;6nL~kT8fl\ȺqJ7Z-ZW qh0U\X.$%]]T! ApkT `!4J]PvHٻQP!E4\t2$BrI LhpcEC7+nTa < u"Xf/pC[Cݜ=f5<.%~N:v9Td7r\{Ah)S{ -4$7$hu`y@Osu1D`w2q `w\Ή. 
u;\ eHv%tU;GZk>( $Bչp|C%vyP(P*ψ_Bzol;mUNj+b )XּI%uZSi[ބo!+)' ^V+^wIea 'L<<q70•U9&rMTф& 7il#N>'@75O}AI4zV˱d׵ꛞ_ >`=Knmsyc; gm YdȽ"+-fҽQBec{ǯ\ 04DctNQ!jF CZ!H(m-m1.*}\ Of:?(6u} =w mAyŴNp1 RrfppFW]w+XX%uP&nl/s1)H8'Zz7Rǜlz TLpIC&D#.g":仳B+S/amͻS:7ǜM=9/NQ^"Ipfp:lpWg~JYk.=Kk 7@ ŕjr ,TPLʍ|n ;D_*.HpδP(gYtYץi[}]_ѥbYs-+e!k'GTL 6J"U]SemAX6?M֧ZrhrX3+I61BCS癣iRkh <]cG}pQ/͞AcgLf^81Qi?#ydz=Rho// [ ?=:a<Í#ІFd)o],fgefb6{GK)Efw$5,tAΤ2b"?o={iUTp<\$_~x~&wq~_||r'p|gyO/̟<=ha'8xf8{z-“]O \LWNpsMe]t:M *(8 />2 =ߘ4OǷo^|zpi:SN/G'T39M.O.&K[bMgKgäd0?&P_ gP 6K8 n$=ϯ~gu?c>WapJWyl-p'Abj].s2+O/=.95s]t~1EgP7gPynCGz{>9qcI{7dom7u1Vl!I;d&X_PE`etWPoy0@>)9PUZxZ%Ӟ_W'.!pr A z=:GE8)pϩ& Mcp$ۆ}A<0*Iȃi1\lr$ߑ|G+'\)t[mΒ }0nj6\yHH@! |vLfL$yۉmGWjZht  &m-?Ipޖ>|7*ak궅\;\ZS"nQ>8z")QޥśA0\^FIǪBeVzX菽*ޕwPKqBmOGw^Duh[p0A>1$OPQ ޢё /gÉb 0㚐SMQPp>"4F~`b&,P(.{دI9Q}!^ [/OC>v")DB҅o_ŴJkrN}/X]N!;ƬVeLQ^6N_d ߴ ?}ѫG[xzUHƷc>/dsic0I{ KNg|(Quݴv&OGT]B$V eK{lƛ52*e:fC5B9VNVT.ʥ\8mIpwwlRd9Zܽs7#ͷ!Q,/ܖwRz;je.%%ȋ4riVMfjN<.a;LtB(њ{radD[NA:ѢTUSP=aڜ_kxIlS{+00ݠޤ҄R9$0`&CA5Ƹ"L5',xCV<lTDR,-5H9e(,VTu56dhť֭4|V~UpqtCik S*6a[v0ʵ \S^&/o82"9c%ᱣSA{Ld4&[ qpLESEȥHT`/$S ʣ4ҲP-#U+ԖWfkЌ=t"dXƦ6 E`g|..PQbpD("+F1F! eaaqYf*+Le%D\V,S=` 8JuRsAsFP9\,u%u__>('dx|poT½J7%D4bǵIN$a 982aFDa3 V ՒԊ~ӗzw>oMWeqZݤ(C?oa[8Lnj߃8-[>h&ڕJ)v,a!<5Fb 5[eR^>U +V.[$I߇O6F7'm"c4>\wوX˟xrt@oS:\%WUNr1|{:gÞuή_䷾8;2guiZt- >E/]ٿ.Ϟ??lrA0$D~,w~H'rta#wV5?Ͷ?.N)Ē_?{~݌Ȝp F"4L⊒bDևNbx}b+?PI.\ҵqΌagpn=-΢좧IsnDqB _ɗn!f6W3[rWfi![ܻꟕ/Xٟ.^\j[1\O?"eѩmw7܍86ʭ%c WzeU_܇Pd=yh,at^;:_ӻ {~|M*gU^ǟㄟn*'+y7.LL Q3?HSa>K7-ddi%#K+id啌|>)E..KW1=JИat8yQ!ߓ:U^vheާC]FBt>k3o+f)5x %/$>n6p x$%tEJe˻[Ѧ$-+:"ymRʊ6@$R"W3M_>U0jN_Y^HN_yﯛyQx~w#.v& ޺-42@*I2fZrPһܗE&aqO)8(/p0[NI4'- eB7G}Hzo:՚auC:t4uf-ׁ쩃77mߋBoōjҥNYͣ~!/gń,|IBA c?8]eƻ1S\zyKo^m?Yw)"_ #'b}t8pe-nn-S:[SnSmHupp]cI:k\j-Vأ {4Ǎ*o_ $fO E [*p*S}a\)58C!N:gyt'v [zؕ[C|!>3V^{ 3τw(c֚L+2N8˫9υWP}ۦ\Wk>}5JĨB;uぽwR_A\j0`;|b?. 
DZ|t^e:>0rʴ% K)AduR>xO?Hy_}=",_6?KSLOW"äzшrZHA"[o1S8ۙ[k :R,ŊI_vV-ӷW/ΎsG3ɩ"^̈dJERNs{2#/ 34 R@B想ȸq9G Fȑ4^QʨƓjPj6Gym0EJ*ń҂#IAJ,DY PGjB : J8eRcGT0KT0+WBѺjQnԣ)!yd@>hB] 3ő , пQE:F愃S1Sby5g?!Y!YpSnAtKjYaYakV!gݪI69oO9T Td΃Ғ̂_EW͉&:Bh= 8:2ta hFZ@Uuy]^~n*jCTf` 2-͢J+lB V4FڨGӧAKs,Ƒ#`E<3u-bVe `Q#(8gB f@+EmoJ< Q,Lr3`T$@ '3n됓!Y>6cZ*OZxAx >4 Xcr@!XiyҍucG;>ݟg\lɚO5dݪ Jr ,H060X,¼t#*1ږ; 6@ebggzʭgzʭgr땶k(" FZ/oL翞?߰l{I$ <_˽|{7?V/ErK["Xs9?~3m 7ާ/p, mm?4ڃm^}D_ucw2R7ӏ }wTxVi)U SwGg!T I=̽ (ȑ;{Yzssj;tyh[ݔ+su:`;Tc*S9=+})YóV) o;`~ӊ.=AiG`n%)?^SȕHPy&ti7sں]+EW5n̰Q~Ӷ Ntv{kWyý'gGSĂM~2?{bgkzhx1H Uֱi/wžrܮ:EŮzcXf }+)!o3uVɠ߅l)D9=1?%YgaIX+,I*Iԏ@]nmOn;qz"APD? )JRP'$ZƏ$~*x"5_]˖"TY x9 $i!"o5>}E&0{>L sC-q0q@2+o^3V'o&ވ|ĉ01HRAC悒A3UD0GĸC 'H*6zLtć Kh1\gVQA_+Ag82M}(~q"#n,>C>SLJrdVPH]]]7Ui.PF帥&TR4DoC &ƴF ք׵x*?i~jӊBCUmB}bqPg>fn7ā߾w~](ʰKY ?Xg[:gC%'X58b_,XGٓv|ϋ2 YwY{; ~G[ãŘ>o}㇏@?;gXF'boĽ/+^⇺Gp9ke˵@X|7ثn?Π#;{nslY[%BnX-{ / ľ7}x= ѻ/ FuR\\sfe7 L}zGBmJ|?{J!p۲ or;߼?N^G$+kKHqk Pz*};ui˷`X̦7o+Iݲ[c+y{ݮEw% WWB;pu'aOꗟ=|/5nqAOӪ|@iŅ&9$Siyjo]R6+PsAPjBO l+ڟPjA5xIǁ 7oz\./R}nU2LMCva=u72m0}k?gWnЁp&{p]hă@2Z(.:1z51{=)k,^\9um.r}˲p:kˢnϏ!?.yvm}Njݖ >iBg93#Fghk-yN-ZηmBhd;"ǒ=;qNgU5zޤ6pնj7<] jkk {:g{cUyngao;z]`L11n=8j0E5wy滻p)\߷~xWrTJ LKxyu,ˏO%Oɻv/}~eʜL[=d8%_}8tPzۇWUuߝurTױk0g1x!;0}tYJݏvɋA'4%4'Ζ2 CyGO~\Pt_o>URw>uAC ڶ_̽)!-PޔnQ3㓗ykqԷ_߽z3b"X5h]q}YG=88;!OҝMj[  1ι#,`KwJt|cMgN&}9/`PE R>&b WRTL65UYe0S„:(7jeиWƙS({=CSy#ǯUUп6@I3}(煣S,I>R)SJ̆6vNRluH>={|~%h]g{ۨna* 5$Sv"2 hq #k{mer_f.NNU=N%`[2z7VXI 9ksBY_=/f-{/CuZ_饜'MENQχU`Nb~rww %͉6/9iS4踴^:=9^ԁs~zx3kOS'ScﳶHk8[teBM4Qn ECwIqoy )5ҌeIvQ}%1b _ŏcEfRsuMQqQ9p\pKOdqX$<Ɂ5Or`͓Ux"AOȣROy`(IsKXT3$.~ϭN~*U}~S] yXӕ2[.@wZNXAq3eEs2FljQ5e86)Tc?(N4VuPsh*glgKaY+S S 8R\F-s]<~N {yZ(#|k|\Vn):?y\O:Wc_dYy^}V P})kY-e8m|WYO1,ɯR+-w"WZ)zFj?Jj#^v?#һm8vaϋ0csl,3:.:>%'MT'/ ҄i fQ&+V"Sh|o4HAI l$\:xZf 3~"e.&ZieHPRJ)T5so wP#R$6:KR)H4j"Ȧ*`-}bDmӂ( 9-,錰γ=gE=jF٭[ M (0 TR MB.\1,tQՔ2Ai㖪$CW\394IAc{hYBN-dp&x3-} 8K5=w0'♿>;v>VTL00gruu.9n7p졣KI0f|g\FAY+P|*xx3Dq>hƊdZ+PNÇa#H! 
3N.NA-a,S;|{G_x`6_P-8!DЫNJ.BSF"[Tםf]!= 9q[@;& @ &i2=r,j,M'C0D̒p"D%0S.J$ņ m=0J=OK"洧Ҟpp<{UfW<,2G 2(h4iKmI-`yqqFX&* i#L)q=%WJ#6z\5Eŷ"E`h'^@n\?V΀Fjh&=XXjb:WLMT"$RinVdl͝sicZY+0t Dٞ~ڨ{6RK QrQ-)"V`m(hEL kcfR)6gREH j P B?~gKB4/0l҉sKl9 idyxQ.aLFYC΃OIWDEaʚ%A4V#*﯉AzQqBka@K07&%vfYmnF_KV&uu}K]$Ju|i ) E g/]k[+Q~Fwe fFL8:Ʌy*^ %r'D%y&O6 ʉGY !vkht N@۸gyRs!;Rߢ Td}ұ6'g)n0`y7qlk}O6 9s١`) OVFKyt3aoT6_w/n h!M۱ 76XؘKx1x2O-{Y 0r!F$*@0 p"{;ܶك$L ^VD{4 Cwh()k_1/f1-|+9 n͟`0[1 2/\Rk96GN)6k͜Z#aI-!1EEH, R=jf <$'FBOS9&ګXvY=RXM\X#IzbN043>@''m5Qg(4̵RduОzbY$ :yE˸B29M%!Pǝmh[ ="I{%jKRNbb1$ ?yFb̧yVȄWm}tJ A /] 8 CEln)oIÓ4"'y ^`BBtR'GOVjR]1 k? 0_VKk(< EgcMp`H.n>haCʋ k "Ezy|B`Ic,ᦓ$|ײu"/!IZthi&,I5zŠR}7Xйp%|4$IMILh$lє^9Ab=A2Cq䵽&*'$_ʧ,-җFS'h-(/ObBop09 X};_D h~76UFT@l JD~ ohFAʼnB-t*`eF"*3MZ9y>x!N*g1 @!'w-!%d`UeD,ƈz=J{ g(['Oc"qM,3PJd!9Ō!:`8H͘Vސr=g;+) ުA)хz6E!uh:ygU!DŐBB&K>RE )au Cd>a*BdU +$p٢ڎ FクEub.9( ńB(5o. 8ZA %Rz`A1F6 E+ݦAs"vӁ3V7#M*EQ*1@sHR6A"xFq<Dvu:[ #TdXlrW"略5'xE†)&˹otnHr5$cJLL Kۭõӄ<ʷ+Fs1IFX[Q4l=0(,"BKH9աJufO3ZAAUީh9H<ÇW5u4]ZTz_]WfNP?ݎ4}j.>r0 џLM/iܶ}gw2$,EJ~!Gaotԓ{QX=~;AE"3B%ސFd poonnK/.–; 1qBUE7,Um-`?O~~UÝe9Oػ h`XLԜcE ')NƏ2߽U?~,;/#׸:l~?_/ڦE2l|kjTnnv}] eWan70KiWr66>K;;ZՁO^%U%ӡT2HN ;ӯ^Ͽ}?dwq0!M Ԫ>?uSRAg5\-ک)/uPk{ ~v5}q9g{n!tXKѥTMα^%תiV02QO[C {Nn@7vW c96}:˝oYE+j|w97Fqdu4i+9p228nJi.CZmq2|\}w_{]l Py3 ٨[0Pkױ!Ha c#0joxFOYGrŧSU,XdY"j˸v T|_RJf; Mț\tDUru=,ynЮȾziP.I&tT=SYE pbw 68.FEb[v0;7 un^[+J  ֶ[|EoxoY"7w:ZW{]Kit'׳Q]g羦WIG媻QK?߻|=1t[º~H}>v:ކ*@CvvcGp!h_{砵M_0a;1ޜϾ+% x7KX7>%4LΏ7aΛT*p"Tk`Gn/FO/eWRn/n 5'j!BLcmk);؟qZIwv8et `3V1=oĴP-)\˸?gDqn~YXSYĢbhoX.3c_&xOF+_err>u<&l'c㕰|%2_ W.W-GqD9h!8~iǨ5!zzIWG&3Pj>U= /v}ѧr G3?YQ3GqrU؁ 2Q{7m8N.+wd[V%kj@'{̙̌932g^fƮz2ES+p/%F9&kIc a41 =E %XQl,c @+Amh0N|Th0Vd)DĀ %9hAfSԶluN৛#Q!0^<|EB4E*%LFYt}fY9f.S;) ; * 罟];ɉb~w ȏ>{8Yhvk\m=]HyɄ>ђ)D`@nXb*`+xJm,ؓӄ: ie>ǯ zIA<@P}K6<!\^˯.A4~= Wǫ61~<|KesaFqBm45rVoowߣiWR6g6 &_]}:ԙ'nD ~ĿQ7$ ݻ1T3煭s2#o/֢cSINi-fRZ*MH 6QxV_Z~pRl;;ݻovvq}.ȧEՇ.Zj˗%%#tyJ1$޸7bLPCۿ]-.=irOyA$y'+>|4Ձ=cp9Ù.-ʀK {:&&%4[1B3KUU 8o@Rf&Zh i-7$u:"(kd8, + # *#"' jPvNzgIλ j*bZYA:DJ(TEN&0 
[binary data omitted: gzip-compressed archive member `var/home/core/zuul-output/logs/kubelet.log.gz` — not decodable as text]
u'˿ACR9ZITMWó x“Dyoq;߉/n/TIxƳW\K!no*0cZ/c_n$SS^0]TX,?)DH6^cc@u@qg~s*K98PȧD^#t}pFPAB2F0Q~_~#VÞ}WsUȁYfNR҄*%4Ǝcjƭ;ja -Q1X#wEXJKzSDFx) 45#b2?$q$O"2b]9 tdA+e8WJ {a0 A})R"bcJ`_fɈlw|LS^Īې88hYSbzO/81sl63 N1obÎCQ<;bPX~|:#-x\޿Ye  xuo~0xߴ?GխwW߿^\ۇUC٤>$v7Rx~X+5E(} 䑰\~\_K*jSⶒRJ8( ҺCD{H!HPFb@^SWlZI˜U[cb/QY5ygcwt:yIկy,\2 ufM61ѠnbB01-]>NQM;Wi$!'N aO!Vc>JCګhIa0/"5ck'I_#Y/6V`[5L߅XՄzD,(<Xp3.l5CLczԏ&f[ D b!By'ItgLFdX3K}LfS s.BrD8xdmvFԸ 5S*5aYI}U i/܇jjtjGB$Uf8еd5{ZGBH ppJRWR~6鍺 7jE.&\KC›TSXG< UCW]wRj^I-mAc\/~UX^E+&?J~~텽+swź&,v_?ҫ/c !U[U,(N2?Yq>TrƩݛ--v )XّUAؿjf.D;_c>TsU0.Ĕ׷>7.e)^:&ӍJˏ+a`ft ۅ9x|!Ǩy4Z]eanwІ9 \i~]>˿q.H#p?gJRՓrJ3493'tt\7-) O$La" 4%.Q<J ,IYr%Rb܍y8Q=go/Òݓ9աh9C% =Uͷ)oT~]~\3xd$[imX{m˫XEiSI6;gh%f#m&dbHHscq㾳VؘW*&E͸sF[1YŮ/uzFϘ1b5]EWYxCWRSmW]U\.U-!fkg"6Ciˢ6C3WLQEp~I@=ޞN7NpS.p3в"В7WOv1JghT0լ޹Í#&~#% 9m}ɵ*Ԋ$(4+E혝`w=tz%0 a9<=hq]@?3fϘ0~kܥ㘣h+3}fqTDx )a x^s.3̆xT@0…bg,OD_J" ӬfJ9=)\b0SRa$((Q#i憹 s `r<{E. z@`:{P=݆О[B!LZS%& iAp 5w+55tq~C䬌VQ̻Z<V\h/ =9@Q,&)ZFU^lHڑ Cc+&±J(* x3PUrvU< SC jE\*hu7wQ @wE *ԀZ՘57)A*=OUHqH|:cj?cO :D)hZ{x ZB'k L/@ՙrc፱Q.>7&==i=Y}X.=R.= @Gs@"@shS N3A鯝yH"S.KLG.cMb~×?V:Tҽfϵ6C$BFt[K_JUU^i&LXzy 0vy@};YR& M]!Xg_#߮6cD3dE+fkf)"EMU7O$y}Jf$t t:s$ó\09e׿;1E#P R>#|A(q@SKX 7A6E_z݄T݆bb:CWuS4҆rmlS [{bU)iⷴQsk)xb`q~p""[[گ TT?Z)Ecdks׵juQ71տ GqS̓3~wq/{ye< {yvkT9o'?)wh嵐h>h1~[6`o b"+ǒ ɥea5$2kZ~\sw=Mt,g#{&^&wڄk4jX+k{=@qeTxuiQR  u0 ,FZaJ&{h1" B*Zד,n_ΦsͲXϭwE~ǬGB B"/-=`(K:0_qq5zZ6?f.8B_^xw;תD]1,EY6bj-q\{pn4X*jpL^s"8 'p|ϲu"ZzLH0?`pe^)Ã+ h8 p\P% ,_Z6$ Z" 5%V1ga$D mL%&-Ϋ|=B /OB l l<&FG}SZs)6s+)af[*!WĠ!}#/s{NoHfo˂0;EbsⳊgd ,b0s.o.4&VEJKPX:& X0*$r9a̮LOR:Q"_Ul 夆oXW Wo4t*b/5{,X\]^>_))er(^y˸jʿN`^0079߮G73YeAB[+1d+omlnEh^2OtkOOŧ&őg~ o6"'B|<=yg+%b,ل=~?TrDA1 6t{ /v~0$Ql7/nJH]~3>wS9FXq/! 
^Ww[0~z̫hҜzdP=M< 'zMV=$bF`nsUӟb)wc"Nn/g~/~3o{-'e7}|}*6>%}Jj1{5^[T\yӾT#  nIitf 6E+hS>Ye H| = DWito5XpR>CUtAL.\_#KT6udIoh|Fau)xO }PUA` #Wg TبO\[C[<0֠٩fZ-p0bg(`lvf+`;Ä"*+͏z+v h Ajz{$SlȂ2BT( _(cZ)R.N \9"Ix FjR]z\2u\Ɉ"*6R ^Ro1#;Jth 3Z"EUYgD6,䕛hMI)E5DX1rD4\{M)-]ʵbrv덹5U  |4R~UqSZzpe Jpeލzo}.΍G,f|,﫢Zڂ*:C22wa> n'==i-&QÔwLPpP%DeTZ},@QC߸@G5﫯ܺ\ зQ+7IT1ݛA]L;ouJ\(.:Yx5q>vR8j0) +L"j"*8ΕS TYC`V{I@LڶH񊔝HcQNc+0Ɗ>ꡃxDbf~?EV ^ R'-@c(v볽p]]'B }a;l,3Z0=e`=sܠ]!th)m(<{6 磏42$ôx`3[~-`Hx}!J9<ؿnidLB3 9F` 豓/7M҂y7Y=&p֤zb/Oq6|\o}u <,F[jhTP*K&(SJj4>e0y%{LBH QNdB8Vq 38Qa<:,6:اZ>XC[Sv6|'Aً #icQa 5+ƝҌ@m:axP\aAQ n  X@l#Y1˱{_ s,QNkz蒦 cSL8E͈.Q6EM ^ Y+x&Fn.ԂvVq=$|Fg[FNq9I-X+7,b|>y߻JZ-}FvyB[~DֆrmlS )TjHҨe2v$Bk&I3>%yZ3IE/Q> <F`v~ݴ&ZHqQ$MU1ug$">CJRKXU8ޚ1WB.Lŗ2GdM i8[+$*A* &/` ĘD!9'y@f#NQD5Io4-ёCׇ$n⿳p[\?E!GRԡ `nJ NO T jvV^dթp}W'7Ǿdhdߺ׌ \E*APƾq=Nځ- 9:Gs}\3N$b.ƈ Z)>B**zDai)m9;#hJH{lj.nl#Q}s[U'aYܘgMM/]e]wsSY?qUH؂i (*( 2toSq?Mbui~/<|8fiaXwqI#Ūgr;‹TdmK=B0 fK]('5|[Xʼn2ՙ"}Ζ:H+.z餬R!\t^v$pCT!izRlP#PO3 d@܆ށHrGYM(K뤋{(폣H([_ݻGYFc5&{Ah7`O]1nQ!H KTZˌ[u%1@)% iɴ%190!2]Jr(1H(Jт Aq(wXr#M"  IzNX[a1 / 0˒J`r%q`)m):E,G2!m"R%,Ls@GK{wdݫwu WSfYA4D+>X/,|@x kA*Cҷ/iA`3AQ )+u +5rw0)y"X:p쑭35ōRWswWą;|FAo5x y {*/ݣcL פoxT`3E*Ev=dF''VD`0!qIa;cD,1𝶀%֎BޅW{\߸gZ`59꽺"ٯ~ N1Ѵu^"zN&@F3 A:ń{ѾK6CE6vƈ!Pb:Tfg\NeDD*6,䕛hMi9eھwcrD$e>Q LD/L؂rmlSS*IDtRptBI!jDv%ҵ ;fN.+F}ʿ p.G*#W~&W_RpW⊊^3w˜xtP,֒Sl;Xd*֑j9j8mln䡤R{ t΋ g %-[b83+*Rn|8wOPRgZ{6_o{`{ȇ-u(XV#K$R#%i%{%JEcZ33{y a$/NM-}$ߕ\*i` R `(;m 4~N^+L ̯rÏvk2 i[Bm1Džڔh,WLlSH_\swC,Ԯ@~sf7dR۰~Ӟp<w~\YR# c@Kk&f5yjo6MxwiWY;>^rfN|L/(Ȕ= 5*RZب͛sg3D^83`?}0N0:V~Yמ(>-&&ƪr~)ʃPf VJT ռJ*QU!Y -D; N' Aԟ3D)o[?]УEcP|qĘ.jFR;Q겝y0]M/u8qԥKU]*`ǽaP ka*bb [hG xC:3*yǭ?c^/`_s[CnӴ꾨*db3AQ) 'VTsȍT6 h^dosX\:N݉9-.\',uN0O)c}/)ɤ5Uc96+VfF?g]I^f9Ehs 'kswc̶Wef9-DnQBP92"WRRUUӶˡ-ƔV۫IMS޺*'cMw(a)vͭ΍;lM63lfSv(K.յ T,*ZHn<'1YG@4/,|agcC-B,cHt eɼ J 4/T $RzD -K|5}, ZP,ceecP k3̋҂ yu־&pjNA hfw,pndZi\D\uh7t>uU<nuy:u߱u{yYMnАo\EkԪG8e`,wA}2!WiVmN%pAVlIqi{w%ťP PrqA:(b6p=oMuE' ܫ=r$evu#\2` \%k &d(c4mlX'X(MtvѬ. 
[)%;WP.aH*8"9r h+gPI\-$$ 줡,!T12P.p !$WT}En'K|3Uz Nߏaudf@*6}#VbnRFm帑!OD%NT˒\RZezA+ލ($NQ&k- L@h,Fc5|&]z >d&FH @52R58:#*/Š+f4t) f0T|%ieHBLo> F5aVI,Ѝj}?Zo'*PՁa,0ټ[L-VHF%K!,}FCD5[CD0"mg(+͞ )J E 1nIJ 0&;q7bټXT~K 0ZlP,%FLF,[ ib!шQ$lE,TF,]U[Dv#E':@;|֌4`D#2*X}Z%1A/z޴g#̴FZmX W΋u[?I+=,_~nZkdAf&oO&ݿ۶up>o }S\(Џqwp<_Bqep¿3O޹g_#1/d^0O`=?( \}҇q73*~U(cߎGh#w,NF}xμ~bWGϟ?=W_>}Qv?+]/~zgώ~闟/_WgG/:~ϧ/Wt/?a'{gw{(Ew%}y?qS`bEsޛF ƸAsw:>y/ҽγx82'9N!O#ȗ|F?nR 9|H,dxͱ$2U[X5nFzE 1s b)|5OpED(S:p߬xS'TVupexiZtsd϶_\ SLWw1O`jb_C OzM?GM|332YIzuԁ{#o~[͋q  > ?F}R|tpF`qQ}8x9Nrɤx8*?[.l~-5,xx>/6M~go'UZ#h~l8j ӟQt#S|̓X!ػRi |k_\_ė移U|BeP{|Q;RkZ. VWf%y +}6k{0@azr[mWQS\ w{-:Yݣ[ۢI L:\WLn`iX 2d8Y'! L+!ȄթHjbX`LK,<O yoeX@Z WHr`>Q3T3ɛ ҔY&#+1#k9&SBX,JC4֝F6bx! =l^,uQJJ.&z;bIq(yYȷI^{i&s, <<a퐠C۰CIKѼ]VtvBD'Q}2B!Q=F238"H`mʙ0.RNS&1&o;DmƩ:eL~6NmNm9Y0&vd.aO#D KLh-V;AMY(vss9;y66hy*rvR"ˮ8d"GC*)z͉͖oF?|}WB=كDN]I oKDXl۱7S>&{oWRb~I;/((K1S|KE"y`1Yn..-jΛG6s^9*l!sGxtxeÖ덯IY6!n3#I[BdLYĔ^8}w ]:zAw<;mFo ɶ@:o=T',|{%>d}><A|fs4KNfz~_DG}cs 'SuШ|4zُҤEs> }ŘWy; z})Xm@1w)VI<Ɯ۳ Suی(pV֒6k8^Y,y9<@|;0Sg&~?{۶_1aD зIb]4a+RZv6)Q.NdKe 4%Qs!12ff w F$$B!fXR- qvCjZT%g~ j"D߂|!_s Ri n&x%ؘVøV"5."\=X{^h0q8-/6- JkJ7J=w %*"a҉q`M(ueElp=j\H "#z6LmTٝ<1d=ؒke3=јkqk6EF1"8R\j &%Z@ƄcCM&*L{äq"2T#2DQ']홡"q˚@-_&.J RPDx)O)cc qco|y)L%@2եu`AԦWK4Z*Oڦ4ZkM)`8e A!u2]DQ^\3J0^2]j=rnp"eȆ2Q+akX5ZKʵD0 ,]AS8pj4˄T%zăPyچq'np!'Xʜ/. 
@bt))>K ؐWR3Jskt#IHx VsC{xyg1ShV{}{/h{ `K-YW))Zf)sN7 N^ɘZ%nq" \O>Q }CjϡqqV W'ڢOM6ã V\jb&ĭCܢrFn4,,&aˀT3 X'.1D *bd ~uz3W;p` X& ½8Ar+ Ȋ[C ȃ @mgVf >I ːIB-%tJ :=%=kNtX"OKzqI^=*|\qi*t DH}t0k dE` `NSO0#^%S1naXydۡ < a$ 120yJYwrv'KjƠ1󇑉DkEmynK*4~:vNc-2}*^31B-(yR[ynP0tSL.8Ϸnx͑wͅPe"E;Tf\/7]hp9m'.fs$҄iQPe)K)rnx-)zk@{ #\Cxa[ /vqE~E kXRٕq_,'㑱d2{8/\rLiU|}%44bJeS@sѷ*W(gp)ƕ  Xo]:89;cMzF/]h;_;l0'ˣ@s6U|w8_w \FETk-=;8az9]}?(Ͼl4+!8oQXUNe5B C 0܊|鏆GJ$C3xk`?ы@;w ̚iZH`\ bXXu@0(RIE&YCX3lS!6 EM* A`ؽ)ަW@m[jfM+q+cɧOL]n|\!Vѻy'L<2]fCޭ f,b)#LkE@KE 'SQArR('k zt ?~>eAqL{>q2[R$ޛ \I5M>T"v"l[Q9>[#/;yb 4O[]N*=Vm0Og˩?a$]/f]`Jw~ ]ߟAHq%:_~ܩ锼v*4V _.k\sjp+v%fG5 J[[ay2 lIÜ뛐߾Rw5j$e ៾.L] BX@(8Ӑ3r4(NC-@tIl$Т9n}j !?zv(@#-tВ`[tXDٛ޾D)^YM)epjg nHۻqJ]5y؁ 0Gx3oQkŅ?F4s?~ Baⷼtg2C|>/u.6uu}EgGy2nn6K #k.a+p])lIFt .MGC~ FYY`2)`sOWY eCֽz3[; MQB54+XW$PXڝAv¸(gWQ2|dddENL\>C?\oIB"H}Vzڭ) No,R**[M5[F2}Ui |4%: v xZrx/j]ւELa\O0@c`\QE@̝ixnrcK>5PbDjkm@EHGfrDiLzg! I \'LSI=tPL-JR?aS}=0 :4lgnn)vL?|6@tTgRbY_ `d6u8?t7M?_+ݭ]M_>,9P-̻XL珎n3Wc"C>?X]:^O34=\NUKU_ V{3`ɋ֟߉lN PkzAg]>m`47>P ?ַ aF?0(Fn=f-u?/A{v*_؋e{ϺΗBcYiw3;kiO=@ 雀NOxK0)b'?֡T7ӋU1m͛w+^}ۗ_7_5}_+}ѪzL`({ix•=Ndž2', T.O#6mq1n&~=>rq3^1$OCs K^wO <xJdW@bE)_␜Bse)9.(Wjx?Z<)Z 1@r@1Zvs9߄GV1ޜVns6ǽ⼗ h|N\pqx$n0RMs禾H6(c 4a#T=O7'~jߏ~Xwxs|~/z}w@E^{|KM60CN?UN;zַ߿vg`;ʏazW`MޟJ(_aH ow\v 9#RhzF%ğ{W 5$] &x 5ϼ_*,.H c8|>:m?nfz/KW^5'ڎhïqJvB[&S+4|3F9Mf߯(9v[lJݒDIJ.V*Bd!oooIRabe5Ԇ|7d KtuH)d(JQ|)9C$ 9D: E2rRY*'qD` (V!f K(Jd5Y I̬xPF7ɼl%3MgOaф b71YSRA-D_ EB)SZYx,:wzQ{qn'CL@)Z뒈-&fa?l|dV}AGdY jA4s3 U Q*Bj>yedc* 1ʎW^@bNbU:ixM5GSwZoe0R]vI;hYaE݁luXM5Z%5<Ekxֱ-Ŭgsl <(TEMmw+l ElI8Y'B%GE!D)͡6(a?BMHP¬ h9c&-e9P#?Bl E )ҕ;D=0|*Je7#~1 jZӆ5| 6 0(L*+!Q kXJr| ai$c7Ll4/&tҙ9f|U|qVolxEftQ^DۯMYDj YD9Pf/&)jԇ6kRW p.*]7{?bDl` lwu2. 
R VPG {aqNktб*].916Eć#,-+(l<Μ^YS$PfaT&|rOo8U$h`c4%Z_XY6NEƕ)ʒ{Y%kP)qHQ~8ҵ؜ʁ(6&ay3yI'6_vtNҭ݄PR1qY rU?b>(q^Zs>/mTlx%LY($>`$~`q9㍡FoxMF[J"5s 0<( +Sv*%ջ?Tcl8,%" P5bfICVdE)@1fs_iE 5:yfsXP{;DT،+l -2+w.cޝ< 3rE?Yfu^ut57*e/뼔Zc1Jx/?`%_W?|>~zy,>]ߩ?GoU595ﰽ[篐f|?K!`ڰ{|tlzZm.to=maD㣳kLj:ƖGl/YO[Ko2c7]W/ǷiXxu$[3׃+gW/Ar O/} 4o'(-B3oLf o]^e㟾Ac?|:o]]~|No8u)GX_sdzkkie&6[(]y cKkKUAFVob=-=pB$Q DPg"`YY^t FR ڦ&,:{%>Uָ-*Vû0[OdǚCX>$o$}MTE' b#Hz,c"aD.Q64*1䠗iqMXNlԘV0NgZ"o`oFlDSb- kX<^o4:pv˲jȀFl>VёS=ώ 5Z{wе4)j-ϘWαzll;0B9ќ͟QՆbP Z+?g9z- )`+VD%H*1wQ3 A~~LZ#ӯ}DUr"IALCV]la,vt:>Ce*XY[|D+5mQwsίUȼlz<+[Oh{?dsYz {׊'ًey;>_ʻX׍n}{!Lm!>a s YtKxo(^?:>`=B#ʕ­pf"XV Gw:9WgMgF y?O9s+1ۨxm&G?1'GoNޟLWI?OzCOx7X-W{vJZ.~nT\`'Ґҏ~zF)ϫ;8ML+ث\co<H"wyS_ م٨#u 7\[лwN:;!0+Zn[R#Cp`oSi͡Fz:2e΀# M ah LW|ZΝp-H^^Zi 3î&.WGu&a%cyЋx:ļ-4Uw-ȸ]h*7f[d=1K 13E[b eH B{t*O(݆⠅u>ctl;nëc YAjߓnqO)⠉u>c-dYi.in#p7΢ <yJbV=Rr NuT-}p,k +ƲyأmeUo8Akfjnj59M8k;FQO+ۨ od;+ |`Nm6v@32ЕxốK-~| bhwocm:)|ӡPmSJ4Nȝ?䧽D9+6dnЌKQ mj;+_Zޕq$ЇݬH}=Y8a !i1H>_uϐ^R %fdLWjTfCw,ΡQxsM""Ϭ(\D3h="զ ʐ\DWdʱ&D4Bp]u]1\WvS%m3! !TtI3dKmz +פjUQ+8!l_H(:v 4} _9řHP'N#-ؚ?nMv5ZX7DZgjgueueueu}Vg˿ t{ B4oSoq|PLd& /r*15Rr f{*}ʩكDg!S" "֍81WgMg t#0Ŧ;g)dW'K]ۙG.VA%Mn.E$,,}5]G*ԸjMteI˰u%ud8{#b, ,BdnJ~!љ9%@S}o& (F2PJyS2mujzJ!H2jdw aۓ!/ ÞI ﲨP6}Hd S7^ E\+l~յߝFq+b(,@G^|ga.9&%g)C^V`1'V^ BMn[ٕIr}#%ۓrXvvgkH1QD)">= -xpLxj#\V=j5 b\. 
lrhUAlA Ҟˮiir='eOq(V2^* *\}:LS)"Vg= 4g,hf&ET OT+U=mYyY 8VȖxX^ban~+CDyxVfQQ#!/t"#X{p#:D:|.VQ56rG6GkVH(O r{2AҫV0t_GםrgpxTGNòЬ~[6Bjk;Z՜{c5|P|E1&NjVj uڬKK]S:o0+,Wr,W|"ZH>xK4X=jCA{$LpQeF6躭׽ϑ MlgFnǑ)=8 Q~*I m4yi0j-*~,Q|;1 m޽(d!3Is&uXfΙ՘G3lإȲTb t1 l{6 Ǹns:ٷ?MM@hz,Pr>lVX梠(BT4!ylF1kkwMS"n`#xK(~94,oܧNa-S'2b5IHbCE> L.fz PǶiQ٭ВTkm%V>5J_`rպMyMa Q` =}lM"%}Q=*" 8Kg } P4?1`2:H\{@DY>s|o2׽cZ|6ݿN&O'ݿAv2>o4.?#zw SÜg7Oc;3L7s4nW=?$\U/`^mK_&WoKt720Ma07|p$L={opMt\̾׷fRקW{q7)޾?ϐ\~o~\?Ӌ.yu?/mΫIwn3m&{7׽ Yg/s,d[ |ms4cbпNǷѱ``ַ?#ÑV:HN~=:|O@cm*spNrgd_t6W1XIdؒ2+2'~굓a H(kFn\4 NFz'0~LOޜQx2nN.f>_J`5эx`N559YNO}Q ;m]T=_'# ɧw7OJɇ67~/6Fc.ͬzSJ в*>7\Cއ@u_͸^Quop5Q꓄=p## ߆d$+VށYNti?2l͂V?#Xh?zpe >qO?owߎ/H7(XwűlWj|mB,P mRaeV@&fwȸ F:f[[{ڼ-cWs5/wk6n{ 6m xԘ= ,U#RPN5u<@e;4G 9!M$P-p 7r)"X("p;xYY*qc;f o[T^ݐ(q'm9A.xU Pc`tcmM]4''.yMtzг7/i˛Lz7HXDfZ~ te=/XLMNH9Hi >A) ^p|Er0Q]i+nX L%q'[:>Q ^7gܙ$Cv"hdC! e $9(da۟YŬ$80+DգMX@h܋>o|:Mʳ>TQ>ٴm'+Қ >a hi0 *sA*ʍh\S6݇N Ĵr7ؒ7 {?Y+Gnp%K߽:[̑1~|5V5oa#V[cˆ^ƖF,S2hK[ag`w~~%GOy0%hr!I^26P\v$ wLp'b̰)@r$PJTyq 0TX@$rǑ^X.%ϙ8 ɇо /c,E ؏BŵIh,ioFlI_ۛ'skO}fcD?֨~m2\#84Oa2KFjwxbP4cw_`qM2׉hDcVJ&ӋRڜɚ[?I͵ZI QZ: $pUxNS/r)-dk,{[ +DTu"¹AҨ=O: ڞv5SHsڼ)}wѵ#ye΋FJVZl+ˣjo,3-9p3Pο9.-r~fѢ8sjs292u&Rg-{c.cQ(Eiј%Mb[dER"iԖ1*MR-fD# bKwik["ܑ18/PvȰjXbT"UB.wQ-(ӘI908ڭn!%T`)͆" ȋ(= BDꬹ0V*oabyM uB\4Gj;-ESXpIX8ۭ:`! 
BxR,N}8%KF.9LQM cq%3UbV-hC-8U ,7oK };6vymvEV8nsdSR՘Zۆ-ySXJu$2ޘgɒ7ӫ3PxÜ$Κx7'07f1Y@G8_B d-3Nhwϼ~&33H6͂l$oobqߜ =Li\a[A5ζ5vڬ~ {oyZowPCʃ <܎YsED<05V5R5(`,/Z+Մ`sOX.&nRM b&vB0HӖRMR=&7AE0qD h6 olaQ(3Oc;kFyT<ɒ !Q NFC!CXO:+dՅ$Y ߟ2tւkHӖR툷YRwı.ng=o¸Qx-}_.\WeY/3;; +]wݑX6S=iEĢT{IYiT;/X6Pl\rgxhL#cy"* tk!"nMe{^BP!#Xgw ZoAX # xx#T;"X`*ZbQXzg=N J ΫYKa)h1Q{_U9y]봶E$Df3!w‚h r$W5`D-ש (R+䫘FЗ>q2/1WZƞ,:YT*OA:Hl.{,̯eX(G>)#C)Qh!b#!ɨFO#GA drQ;rQ7boB1',R0T#`rc7x AyQG \ \ ?vT%&N.Vs,Lue:I2;كI/_o$ۛL?u`o&p?n#kkk<aTM}l\Dfo,CG=C1a#!j?%n`:N{}֚qvb߾XҚcTG }!y .#ٷ' <j2D$ښ^ږ/tҋ@oI1*B!矛xB` ck.O|v3y4boγfĎd&`7350U110fOC|}@s gO]Z덢^z4R9h1+r؊|,[.ߜ=tBۤnJpf=<FΝ1w8B(4?wW`e:hf9OƼgXM%O7N~"L&yp^.XW.nFbR~Ҟhr9gzЖؼ-4I{bL4);Ϝ}`F))gBck`,Iݒ슦_g:bwhWTB{6bC T epFHl w@PDuWY&SRut4VpSatIF/G 2N6S'4ө;FQ+L4H@G(,Wl2zTo"D+ZX{;r.-99F=()6"o7"HNG0} R-H_a ayF{ƃF8ͺQvc xۓW3ӚqoOwcs iBgcg2:&+g!@|&u|\驐V# +iyN%0_j^k5Iv>FamMv I'dhf 2Z5y^M/EMdvv^Mz"ۻ_hDNmabuSYZZv?N/7Iϣq}W6O^#{o~6Nx>]Daԫ]g C\?z;#½yOnɘr_mop^gh8 f'51A/Ǿ9sx76%uk4"OD&HqԑWSddF͵}i96&g*Ow9w .&.ǃc~JFgs Q7yl W&?< ܉9 F8x:~%E)Dz.3)J1KoU4mD%>>;pr`_ZcXKCUgd~,/ܟz'[?ݿ?| n_^ޜ;aDaʃNo9$ݸ"ylDz~c$s_,ydma<_;@<-]y+:m0x1׾^@"&QX'8vXm>e<0˔6 ezy=wWC 9!Z!kdَd@!,8~=q9c˸f}ks8 PrOԷ-Q|ӭdhցY-TF קx!!nQ Ւt~a`g  t4T4抒j!c7ys[Rc\n$TxxoѰcQ~y1J!xG"yGD8WsϷ\mMjI9PmI,s @›AMF{u',x~ %jޮ"a $Q5[ktZY*IsRh`~J Qj4ZumWVΪhs&Rfsmwb뢒Z:.4<_H|0RXc"BE>J"Sp.}Ln/-S ά81nYҬZ# ̪$@1Q08@,6D!$f>=O@ϟP8P4Č~>"`k@+a,DL12~CCȚܶlcY!:eԡ4Mϋn&D9j*遾 WD{P )c 1Z3&DOC?QmwjsIte]fk! 
dv,eh%#)o9!+0z6i|YW&adܙAD!rsO) 0e8Oмg7#$Lǧw'?qeJ"qzgS51TYuК]1Fʾ#teQb/vOLtƩ e׶\:I&5058[1a*>?X>e.M4ʉ2)HCs-S~n\]@̅n~6ٍ aU0\onO?zn?^mn3w++±B6,[褹ϖ Py۳T:I~#'ȑ;ʛ765xl<0w bAJqRK7x`(oSMM\ kؒCBݔR-5k7xP&ojkiF9zpv< P Նϭ0nC&0šFqQxPQ=AX!S0bO=ro:"y 3XU=B71BruDakD`,#Cy:yb۳rAEKV:E\BK6܅9"hgD2PQQ\f-eN<K\%BQ[J]wJl7MYlOV>֔t7C/C`BǒFGrpYUH ]v Pv\JO(udW{5(i 6,/QZDkD58P D2"r?C˧#a dMGVY|soTN)M~,v!QO#&u mBwn}%!֨ہ9lF!@u/5k^;ּvy]d|x7FXR)ׄ'E(0(OclAIbYBxDc jxKRjM>29F|],I*?|;T 7qcpC9*"#%iU݁ƩRDZ & µH,a)}JuXBYcRT|4\>ˮ.V L &[*9yJI5uARDք"s t t^l>Ρ+q1BB}8I U SmEF IYD%hN\A8b UR9HJLY0na "OC%ZDsĠcb cL(%(jZ<ծQ ^]G"'H52gњ?vqh?)l @xL[)'B@JЄW]O5!IgO#B@TByW*N6>'M?R5әU; -ʯ յBwĤE̹֐+G@Hc!KАkN ٪u5R lW"$`<&|PakܪY)32@"aBwFfvݑSy?=z+ˡxCFkd-rx)U$[{Z, FJ)|vx= l>/uwe_+S"R] gU>m%ްReg{Y3`}+EwWSn{.ϗZ|))HlؾrYY^Xl.b.a6C]CFXEo](GV3Zn Zzv0y%1&f+ !"ꋉ 8&z],4F!a rKj<}0n@eA+g ],|Ggi%gM 14Rwdes]c9̟B䖓,JҐ'@mNa!~e̚'eClFibZ{=o//*lp7lcϚQ8>xYr4kQ8"ZjTm#ߑom0YK 3Y gKI0F1|nvU^bfY~| 8wN/T}F6s<ۑ&h|00ϥ6Mgvw?󙼕bPd~MwbF@uh1Ǝ Pfh[oE+Tϡj%Ɖ ݪ u>t۟0մ6VAt|,Z(oq.K7AAni+>G/ ^t^+v|,S&vh94b(hC!# %V1OIwj U(V3sJ̋tQ[z0'lC>1n=|ڲ4>ş>\gQwчÛQf8/ǦvMVo1^tӫp x?]p 5û~qd~r2(.Aeev^{foLl?XP wOoqxʭ2rt;,P$UiGrYˁ >5]/|//߾y*_. K~痿k˷?~;_zm w ^un;w:#]ӹuH_sLPxWۯf;c^s{Ƴ`[N~n҇ڑ^oܬ:c߬"x(BhWtw{V~|Vs5|& .^uz2>V7靏%3>Pv~u?n'ҽA-g2=}t͌~N/=* >ܻ?o/oƀH;5ݑt._q:+/p˙6 {O/2Ȩ fCB RslUq \q[=j Vqٶ,j-Ğ1yϣWԵ7pRaaX%sk q틢xG5:>!9y95-9*{i4pA;cۍ{+&+r_qdhx0?;}p[m%:0Ecw;)UdfhF͉LQWӌX%{[+R[>@q&~.B9[ $jW^;x9ZOQVKU WŦ?i!sWcs7vnM3}|h'goflZZiLEf>ԻF-Vq%j#O-Yދ:>|6<0ub|ew_ a ?9gjwX\T|@HEUYMQS+E:\7=\bBÐ}ZeC ]wbts oКg| ncZ Ϗ=0e(6c.Ƿc3DxA=i(|[ߊ0I AC6lR[DW@ZR~ 8[Bz.ߧzpWl烹B0$(a:(f)RXgJU4CTk͐/s>wF豼j-g6E-,)6iYJxI3 $XC T"Ñ13,MS!"JjK3$ņ+i.%N0M"XAP2ɕ5F8ţ*ÈQj26j]^m~opb:w7IflFVʛW=E$~A.^X,}vn] AD4i5x9Ȯp4DѪ8)5rA_7nh![rD v|z?0\(^%x"HJI׃VqZQp-% ]wՠS@-JA{%D,RLl4i q[4EJMTB7ͺLl<͸.t[^ԇQtMθ)o *@ЦՄDimfC{ )![$տ]kqH@x 1 vtDQ4lGh|Eq;KGX!Q{-Ty)lLv:`DRI((s$QnS-5܆bI(%DY"}]\xǚl1L(f dU*3! 
C,e@" it;e $][o9+FvK,`8-^medّdr%Yn;rd%YUF*Y${7Qwp p{GڵOPwRzQ>I;IJm a#G?ωVjYMr?O _Jڽ8kXwC ]hhT5 8wV=RUO;wBrKVQpGI`wgWUը]dKkp1B_o"yueȵ_s639WAB?suRol4bC.zͿo~} ӌu+v,ygK#f48t~ܜv@|jou{ sNqE4{oOjהMy9r4URmޤdcϧvj:[zφn9sV}NGWۧPWa&v{jIC)X_%7G+ԫ@TuHrs}v5X~Y7.i߾zu筨5\PB\tuIEQrq>pGP.}nUq}5Qb]#4aOy/0]:}Q(B{mxT^lͤ#x0 &딊l;w]Үᒷy=0S={֊%a%tOq ;/qǧQ*0G^7 B~Z*(,e {VrNH_jb f0Yz#*S9{^D٫\q^e+}k3%݉~w&X"C."¥\Ȫxe$kw9.$gt{6dZnHjn_쩝kx{W:O=<^jͽ;a{=]89k6-e rNmDw:mKv@!p vmp}Ph|Jc\+[ɖ,}m'o;=,dk;~ۉ_zrzo(zHvlц\0q؂4 ƅ*6wZf2g :_9L#Ob}]lٜ(G. JȊʒ!e L&:ŝ*ɕOdq^k ՑbNj!JnNbߌ5dSȬ0AP1,uD"ϵ)4Jr+ 9aܩbGt(0' WtF/?gV&cJ/K%Hd+5l1DXe0T  _w  q`Kzùc%lP({ؖWѳqn$1&Wik_< xLP <_79&)&fRIT3% jv%J1M87y6F(ܚ1gIB))Ciy  gVfS?&|J\Պt*Pԙڞ4_Fg&ANM|~6"ki)J䴧BHHҀ\0#<S=1`tgT=KOcfw'ҖӚX*@B@V;Gk/h^#v{Ae\k%\؆7gLQh-+JQ0@ldvQ ڣCKqBpȲTg)Mp]+&3_*HOg$Y@ɀkE#@pҚnT;cb!6 yA%Y'O +R$ /Ʀ@{kT=K;'*cuA$bb9#PSR1<0aj1_ԻD]E`Z}JWގ Z₁0AVԠކaVȒLI`dEf5t/cQJ@ъ;D8O#[`,}HJh%LZ  K|i5 %H!4JͬiuE~0a:%S'gH90W yR) ="Ȁra̴L>tEb%r3,`.H7 LJcl^Eů}/~MdzP:Rt_b~};2kQԎSFr#EI륞ymvoA@(A4o3:@ӻuH])rWV E'B=ş<'Z+Jf&=> xq|8VZLaZ\YY Sw@]gC3m31>5׼DN3c` }b"wւ= Q]q!X{k fq[.G%3/lgdaI$~ν pCJE}?BAGTR09SΆw^b20gΣg<$ekyloxI 75XZyu:ºB9F1yBQ$CTRiB[7)p*r5K;u%B+O_7/@B*]6ghLQR~$C&N_Z+ xG'],wuR;8n֪~] $kރ_D Xe{޺̗qUh " װ&nx`Z]3"J; *dR2'GCŽ L%-HWWoN4GM)ST\x7҃4^?f8n$IFapJZ̍LeBZamU:U˃ Rjsdۙ p{ߣ4,]c깎w) b+%$,^]b?";-^]Şpvkwm A3% ߾l77Ͻ00!t1̣u*l 5(&8*N-މ$Tw=ϫcFF)<++t(}]8p *s .E[wpES57E7]c{ߣlZGf g &YjUo.Jg~闆-k,J1N$hTi:A'hMLpqaž|$yKW%26qW(ڨW{T rd'#}ԎH .Mx- ֋uxzͺ3WjgKF`BӦp1OCJxaEZ $ Y,ݗr`>Xj6B| WCёarrLgM1 kmVEE@?ݢh(@b+֭$pWV,cBGZq gpF@D4Kuد_1&Jh-&k{å7x4_TS'5rz`Oo0/BϘdBn }w<tz ɽx?gi(b'CFbYp!ւ= }8ﻃ8TM.3w&eFQJ 2#`/rk:o yLT-w*HAE!pVn|k3_>% ٴ7gk}t48sq#/0ʰ =nmS\؉!B4Aj6/>?O}?Zo+m`+^Q1ҙ&UjRɽϢ2tɕuhv*/(UbFBY=I>\@I 1F'Վ{%Ck Az.RV/evRz*eOz|Qk2)?2L>ˀ&ʶLgl=%*M2$P&OU#J+RtC>v$pTn)K#\|L&/s'graA*sQcDNdh̷$[{%K_wsSS`*;JQJy{":qA帯:"0m;' "*<("L&_C7]{ClBLODʐI"7 pj(E [|%]'{s4<8x>:wR7YD 1%@[ j["?EW=%Ln/Nj_,p}<Oi'A2@ͧ 10HqBGb?-N#{.#׈H;TC"n` :_ȂKNDok.LLd=ta<>MO:TaMʥSe56/s`KF]cձkI%ۚ\q˴2y'|;(4XC+1_MЛٯ^ro9Y+e&KwJ5=*2 :jG`y ;MV ~݅WWIS<ﳮ띞TBnH%O 
>i2TZr0T]$K(6xCOy%t!7DQ*H `Q}xx(wW!Otط=n`:Ӯ:&x8t 6rh/g]&&dYsÉb)2Kl8g iiuH\(#>x`7Sl00h0qpŀ{Nch0ڭVrm_krYn̾N4yU@n'? )`)\}fGC.F# m+Sk{Hdec$#"78su(fBS LUz;8XQX?uھ.v Wu Lݻɤ7r$>oֽL1aw9Oҍ_paǟg:N&`w?lPu>> ==_bt,,v[tuu0a~01)uPzt9G3qh:X<}jOa?%8f M<˿=9M.x]|{q*O@_QFWާ;Ͼ}!9M'M[&pA%eﱕa|?g]Tj;bteI޹0uYWa[oM{E+M3ojDFΫ:tԗ) FUI^Zy<s;iv/:KUſ_{s-n='}rMM+E}hMߥ-^}j_ aԫS2<\ŗo|~fG6CnPLJqI w >)Н&o9X7^ iǜ!WDJ~NB'zM,H7xx>/~.gM~gw&\<~ht0{__O#h{P|>N~'8mv43_Vq`GB7Ðdk12^^Prs+r ūiv7I7n !XHJ 5[{7 ^O m75,?YR01=+I5TTs % {3{8Ҋ|UweîuꥉDڛ^AM{޻#ТF)%'xn K!Y:Qx6_jOMZeHK78%ikE6?7aGJ0X~NMyggaڀTd&(."2?+`G@1T[4%$yz1lL@gÏW'>hP A5 FSD2C,9:5% tM0xAubv @j p5@zPtvt8KAnQź߾<3Ķ?¸H#x0 1;r39?A g-v3fS|t n24l?{w3̯Uޖ}j"Эs.Wcߵ{1\ :fkpV<ܷjwιTtɜ+9/}@lblM;sMj˶GɜGsFtvKZQcrNm=0kfn7;k-v Lu1%(%yL"Ҕ SjIUE*U&Ҙ("hvx*LK,*2i_ޖ $` 'ڀy*= C4ά+*F*"gMR)ׅ?=::<(QwaVNO`=_{7xSFo0_$u܏pslYcǖ|0=Z9cQ80.z\1pYFAju0o6^O[7V~<辱$Uw,G7T2E:ع"36J67uFi0{]Q}^ ?_OW:(2N lM˰kK*黁v~6FN.K|7҆.չ>mNME~ĕ5TUH /Hm4f' a=w;8S~/EGN*r)t\/O³k!Z_"g$*czQRfG&":-'rKuO7E k])YtMN '(;jɬI9l+Eukl XnjL|- ad*|Ł} V"14"Y4; ^'yZ:έ|5sRf.;=a:R$FK>*6R1JXL?śu>{~޺e~<7f=+N,2AO\[/B "9`Y)2ijt@LF) gspRh'O")?? 
S"*B ,LS4%@IR#} 2k1ZgnUiLwW䖷7!{~֗[QK9]i|5`5{T_.&f Ó_H?Jdw `e}p`>jD a,P ա(W:R+SlWA!t=t4[M`Ȭ 5RO)sJRٽQ[݌m;uOm0O;YڈpJR"DŨģ{&N0T9sv4;uլݘ&"0Еؿc1 FLA(zg9(C$=^e˺f)n=J1Vť$A#Y4GE Cs&B4&.lK_#՞y-?7rp3ȭ?sq̛r,J,ҚJX~H'oƍwϏTI>Q8^t#NFF޿{}a<.vm;y!4y}wy`d s =]v=3^sxїCjfzW F8Tp`!yzx9Sj#OnUZw V5u r V?׹cw(m_2f699j֐v֡}1a)Σl杰82r.7eNT_uHݯT/ \Yi5-5Pk,{.8ryzyyOQ^L9;\F=+`j3Ot|CJ_lxH^|-HI_b)I f 5;5c4?]dيx4\3N7cGI>\RrU8 -d&`oM+~MێQ;UGqsp5Wh?r )6Lʘ}[ ُՈ+#קnz쒰`}ͯEXBW',_p.v/DR\ߑ)#%TFi^kՀ5k;Zlm3r ^W@ In /O{ρ$qQB $ ՃfN/} M| 4|d>p,XWV2mW6݈sK2A/ȧxA,c>8b ~6[+6xܭԅf#5#\^Z) ^nr qo}$Ex̶n4숤_i˒]Պe\7Ī9/KCC5ӰZi%G Ux53\{`hrB'$/.H8?d%"`e(36?FSljq'qa"9gcoi~"Wl D_%Fd :L B1VO \ee ף‚Ӽ'tH :( \*cqi-h7iD'\΢(7I'89k˄NfFծ0e= 3V[þZ j&4 ֧)@ 7k P>[YZ Vnl gENQFxc?d9 }L!IY^ ZDa?MSQ:Z2AdlN0(=Bb= {)q IDZ7@h 8=ւ=]cfܘq<7О &E, O@D@׻nv[~B؝wqo5h7Y/)]^_ާ-2e;Űj};,$MZ٨Z}B>}4D+ Drc=Tn}J;)lߎYۑݵK;[sݨ)]&wvMAˎztEMnEj6mq3ѲDDgQ$|gN&1OO\JisR983k@pM&7k@ƨ FCf{n5B0vρxvk[Ƭ "Gw׏#Ӂ0:"c/cB4upZsϥ\Ůd.EM eA"D* `qKٵ =gf!KNl߀YՕ=`)P|341-Rpqf+iZ(O%GMN%eĹ`i0d"%MgVRjmvĒr0`UHɽ❚8Y(7!We-5 nէ6 MfkY=E*kF,k&H/bt'?G2պ5bijjkYWV{RX %&p1 fؠU 5. >(h:5J!9PK]|r:"Yin$R AT-!dV|i= z%$F*`Ue2$VF7)L6Z+)Q@ u&(YVF Z9C D.=EpżU^hwykl0oD kAYI%ݍBOfje+CAY)A%1׃{ǵ\p|KZr!hQ#qyj4';9}a/tn)l<#Kj)ro$.8W?b`៳uݤ?:^D8V H܈eĉΕ  OR_}ÒO$UBAʥލ *84WnƗD/Zh8'GpljƅsZu,ʸ(;)`%0X)+oerCYZG>.?_&3TX$Nv-_JnP?5V]Yhg@UXhKZXIԯ-5~JEޝi?rwWziTR\Ц3 Rd҈Okr-H&Q/~N__񄶂@R*[_kn5pݚpߣO^w7;U:"? _ZX761,u` p@c(Q6roitYaX3PQ`iD>R6QPFSp:vZ#K.妐5SW$+O^ʖfi[;۲mC)%H,5Kd.ƪ V|'G3Q(:o.Ot7?G Xyό&CUIt+纼^iO)'zF:dNԯq2ΧlѪϦ׳m5~U| # z{G8Uln&4x.=[,,VkeB Ȫ P1@L66`qցO>kj#uU%~<8#엳7}ems}) flqwf.4oWs8N%tZ=|pVxH^^o͹|[2y׆<_?O2 GPN{~ G?? 
l#- |2"_@1]i#r%6r^Q!¤>޺*&9CS֔JuE]]S1 =m]ŗ ؾ6Kfo=ޘBX] +!\'v@`61f6 iu %͛˫ubA:m}򩓉N~=כoJ{F_aY[^ 3l`%(%5&%:HQH$IVS]]}To┮Xj 蕩DNgLSj׆zlh<ގK~`>K_7|ohQ'GjhYLxByċǑ8yTid|hAbS:kU %$rKy!QQӱTVC-D/y82gܚő9hm"+G]`E@Pi,Vv\ @h+P-9le}:G93MD,>5H@|opGr)(M#3GY 'Ѧ)m4/ƻܫ]-3GzY g|ߢA;33Am"x7 qx@U,CXIbPps%KŘ;LgU;W1dp 9_8,`Y9+8KdLI)a_א-v@?wU v}b<h6R '&LJ>wS*H&r(I0) BI;R+ئ'y(oinOAN"ak0\|kq#S韸㔉&&^{Lg}/ KB[nwʈVi\[2 l9- T- mo)fȲظTY˲L_]QhT)\_ҭ-sҭu(.&[EHeo`2,gՎ,Z\vUI\KUL3U8N`:;u`u2TȰdsἅKC_\idèfljN5CDM/|=NBxŜTOԭNIAޖQ oC&2pISTE2imp|0+/9fҪBC޸)N~n&M6X2pQ(b6}IMI}y˟funUhWR:cVs޺ *}GY庭[$GVq)\B譈-B]srWQQq^ r>˹gFÙ'Y=vN p '}k_{X'4}$&/h<#2yE*9R&+BNz)]2ڔPO/&[UI|4(k}޵aނa drap>S2< Da?=vSpeB/r0_ߛxZ0f*9|ē(sG?&9DžI֑,_8]I]Ev׾WhffB0v8vdt봃%f4TeM|,bCqwu_p3ps1hn% SS܍'dGF`^~- c!KMa9ǝN- 5SG4+/oa{ C;ǃ arbDs3ҥ߯>c;fO]iv*:+kL,xӔiOu:U+݅UZx-WAD5-43`G1ӌ(Z VO< Vρ !T_a[W#`δdtpGzyS f~m?)l[qZZHS8J]4dmli`NS3^Tya4e(Iʧf"/dne`4ڪ18PQ[ lm)#rUy.Ǽb\nSiUPDro1v񲢽N)]-tnuW7ֵ* h#z_LYyZ GՎ^ Z@4DڄRn˝hieV2 KElMdՎ&2Ÿ"nS5E\0TKԒ^[xN m;E#Xlvf[SXM;sA]`![]S-]vin65:2H?]_ܪ4/8cm=ު60Dh ~pob5-%:LFa4A-03%fz~/;Z]OśfD 'yfho, !2^؏Q2 Ӿ  NL&B]u١CBOvar/home/core/zuul-output/logs/kubelet.log0000644000000000000000005045431415144636537017721 0ustar rootrootFeb 16 14:52:53 crc systemd[1]: Starting Kubernetes Kubelet... 
Feb 16 14:52:53 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c476,c820 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 
14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 
14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 
crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:53 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc 
restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 16 14:52:54 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 16 14:52:54 crc kubenswrapper[4748]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:52:54 crc kubenswrapper[4748]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 16 14:52:54 crc kubenswrapper[4748]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:52:54 crc kubenswrapper[4748]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 16 14:52:54 crc kubenswrapper[4748]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 16 14:52:54 crc kubenswrapper[4748]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.702418 4748 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710009 4748 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710042 4748 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710053 4748 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710062 4748 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710071 4748 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710080 4748 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710088 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710096 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710105 4748 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710114 4748 
feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710125 4748 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710135 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710146 4748 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710157 4748 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710167 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710184 4748 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710222 4748 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710233 4748 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710242 4748 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710252 4748 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710260 4748 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710269 4748 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710277 4748 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710285 4748 
feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710293 4748 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710302 4748 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710310 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710319 4748 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710328 4748 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710336 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710345 4748 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710356 4748 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710366 4748 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710375 4748 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710384 4748 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710393 4748 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710401 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710410 4748 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710418 4748 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710426 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710434 4748 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710443 4748 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710451 4748 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710462 4748 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710470 4748 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710478 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 14:52:54 crc kubenswrapper[4748]: 
W0216 14:52:54.710486 4748 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710495 4748 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710503 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710511 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710520 4748 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710529 4748 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710538 4748 feature_gate.go:330] unrecognized feature gate: Example Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710547 4748 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710555 4748 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710563 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710571 4748 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710579 4748 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710588 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710596 4748 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710604 4748 feature_gate.go:330] 
unrecognized feature gate: GatewayAPI Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710612 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710621 4748 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710630 4748 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710641 4748 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710652 4748 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710662 4748 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710672 4748 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710682 4748 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710691 4748 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.710699 4748 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710877 4748 flags.go:64] FLAG: --address="0.0.0.0" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710895 4748 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710910 4748 flags.go:64] FLAG: --anonymous-auth="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710922 4748 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710935 
4748 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710945 4748 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710958 4748 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710969 4748 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710980 4748 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.710990 4748 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711001 4748 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711011 4748 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711021 4748 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711031 4748 flags.go:64] FLAG: --cgroup-root="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711041 4748 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711050 4748 flags.go:64] FLAG: --client-ca-file="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711061 4748 flags.go:64] FLAG: --cloud-config="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711070 4748 flags.go:64] FLAG: --cloud-provider="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711080 4748 flags.go:64] FLAG: --cluster-dns="[]" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711091 4748 flags.go:64] FLAG: --cluster-domain="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711101 4748 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 16 14:52:54 
crc kubenswrapper[4748]: I0216 14:52:54.711111 4748 flags.go:64] FLAG: --config-dir="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711121 4748 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711131 4748 flags.go:64] FLAG: --container-log-max-files="5" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711144 4748 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711154 4748 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711163 4748 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711173 4748 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711184 4748 flags.go:64] FLAG: --contention-profiling="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711194 4748 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711203 4748 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711213 4748 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711223 4748 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711234 4748 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711244 4748 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711254 4748 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711264 4748 flags.go:64] FLAG: --enable-load-reader="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711273 4748 
flags.go:64] FLAG: --enable-server="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711283 4748 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711294 4748 flags.go:64] FLAG: --event-burst="100" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711305 4748 flags.go:64] FLAG: --event-qps="50" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711314 4748 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711324 4748 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711334 4748 flags.go:64] FLAG: --eviction-hard="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711346 4748 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711356 4748 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711365 4748 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711375 4748 flags.go:64] FLAG: --eviction-soft="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711385 4748 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711395 4748 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711407 4748 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711417 4748 flags.go:64] FLAG: --experimental-mounter-path="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711427 4748 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711437 4748 flags.go:64] FLAG: --fail-swap-on="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711446 4748 
flags.go:64] FLAG: --feature-gates="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711469 4748 flags.go:64] FLAG: --file-check-frequency="20s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711479 4748 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711489 4748 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711500 4748 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711510 4748 flags.go:64] FLAG: --healthz-port="10248" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711520 4748 flags.go:64] FLAG: --help="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711530 4748 flags.go:64] FLAG: --hostname-override="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711539 4748 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711549 4748 flags.go:64] FLAG: --http-check-frequency="20s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711560 4748 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711570 4748 flags.go:64] FLAG: --image-credential-provider-config="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711580 4748 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711589 4748 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711599 4748 flags.go:64] FLAG: --image-service-endpoint="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711609 4748 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711620 4748 flags.go:64] FLAG: --kube-api-burst="100" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711630 4748 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711640 4748 flags.go:64] FLAG: --kube-api-qps="50" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711650 4748 flags.go:64] FLAG: --kube-reserved="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711659 4748 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711669 4748 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711679 4748 flags.go:64] FLAG: --kubelet-cgroups="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711688 4748 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711698 4748 flags.go:64] FLAG: --lock-file="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711707 4748 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711744 4748 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711756 4748 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711771 4748 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711781 4748 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711790 4748 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711800 4748 flags.go:64] FLAG: --logging-format="text" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711810 4748 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711820 4748 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 
14:52:54.711831 4748 flags.go:64] FLAG: --manifest-url="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711841 4748 flags.go:64] FLAG: --manifest-url-header="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711853 4748 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711864 4748 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711875 4748 flags.go:64] FLAG: --max-pods="110" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711885 4748 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711894 4748 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711904 4748 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711914 4748 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711924 4748 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711933 4748 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711945 4748 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711965 4748 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711975 4748 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711985 4748 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.711995 4748 flags.go:64] FLAG: --pod-cidr="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712004 4748 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712017 4748 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712028 4748 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712038 4748 flags.go:64] FLAG: --pods-per-core="0" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712047 4748 flags.go:64] FLAG: --port="10250" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712057 4748 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712066 4748 flags.go:64] FLAG: --provider-id="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712076 4748 flags.go:64] FLAG: --qos-reserved="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712085 4748 flags.go:64] FLAG: --read-only-port="10255" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712095 4748 flags.go:64] FLAG: --register-node="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712105 4748 flags.go:64] FLAG: --register-schedulable="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712115 4748 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712130 4748 flags.go:64] FLAG: --registry-burst="10" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712140 4748 flags.go:64] FLAG: --registry-qps="5" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712149 4748 flags.go:64] FLAG: --reserved-cpus="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712159 4748 flags.go:64] FLAG: --reserved-memory="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712170 4748 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 
14:52:54.712180 4748 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712190 4748 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712200 4748 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712210 4748 flags.go:64] FLAG: --runonce="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712220 4748 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712230 4748 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712240 4748 flags.go:64] FLAG: --seccomp-default="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712249 4748 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712259 4748 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712268 4748 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712278 4748 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712289 4748 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712299 4748 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712308 4748 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712318 4748 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712327 4748 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712337 4748 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 
14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712347 4748 flags.go:64] FLAG: --system-cgroups="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712357 4748 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712371 4748 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712381 4748 flags.go:64] FLAG: --tls-cert-file="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712390 4748 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712402 4748 flags.go:64] FLAG: --tls-min-version="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712412 4748 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712421 4748 flags.go:64] FLAG: --topology-manager-policy="none" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712432 4748 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712442 4748 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712453 4748 flags.go:64] FLAG: --v="2" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712466 4748 flags.go:64] FLAG: --version="false" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712478 4748 flags.go:64] FLAG: --vmodule="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712490 4748 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.712500 4748 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712736 4748 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712748 4748 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712757 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712766 4748 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712776 4748 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712787 4748 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712798 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712818 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712828 4748 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712837 4748 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712845 4748 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712855 4748 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712863 4748 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712872 4748 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712883 4748 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712894 4748 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712904 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712914 4748 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712924 4748 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712933 4748 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712942 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712952 4748 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712960 4748 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712969 4748 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712977 4748 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712988 4748 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.712997 4748 feature_gate.go:330] unrecognized feature gate: Example Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713008 4748 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713018 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713028 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713036 4748 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713045 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713053 4748 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713062 4748 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713071 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713079 4748 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713087 4748 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713096 4748 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713104 4748 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713112 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713121 4748 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713129 4748 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713138 4748 feature_gate.go:330] unrecognized 
feature gate: DNSNameResolver Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713148 4748 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713157 4748 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713165 4748 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713173 4748 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713182 4748 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713190 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713199 4748 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713207 4748 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713215 4748 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713224 4748 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713232 4748 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713241 4748 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713249 4748 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713257 4748 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 
14:52:54.713266 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713274 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713282 4748 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713291 4748 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713299 4748 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713307 4748 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713315 4748 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713324 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713332 4748 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713340 4748 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713351 4748 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713362 4748 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713372 4748 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.713382 4748 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.714552 4748 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.732497 4748 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.732583 4748 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732821 4748 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732845 4748 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732857 4748 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732868 4748 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732879 4748 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732890 4748 
feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732900 4748 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732911 4748 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732922 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732934 4748 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732947 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732959 4748 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732971 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732982 4748 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.732992 4748 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733003 4748 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733014 4748 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733024 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733035 4748 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733048 4748 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 
16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733062 4748 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733078 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733090 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733101 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733115 4748 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733133 4748 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733145 4748 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733159 4748 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733172 4748 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733183 4748 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733193 4748 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733203 4748 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733214 4748 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733224 4748 feature_gate.go:330] 
unrecognized feature gate: ClusterAPIInstall Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733234 4748 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733245 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733257 4748 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733270 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733281 4748 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733292 4748 feature_gate.go:330] unrecognized feature gate: Example Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733301 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733312 4748 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733323 4748 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733334 4748 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733348 4748 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733362 4748 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733373 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733384 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733396 4748 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733407 4748 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733417 4748 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733428 4748 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733442 4748 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733455 4748 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733467 4748 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733503 4748 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733514 4748 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733524 4748 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733535 4748 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733547 4748 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733598 4748 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733612 4748 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733623 4748 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733633 4748 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733644 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733654 4748 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733664 4748 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733673 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733683 4748 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733694 4748 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.733704 4748 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.733757 4748 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734057 4748 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734079 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734090 4748 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734099 4748 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734107 4748 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734116 4748 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734125 4748 feature_gate.go:330] unrecognized feature gate: Example
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734135 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734143 4748 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734152 4748 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734160 4748 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734169 4748 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734179 4748 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734188 4748 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734198 4748 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734209 4748 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734221 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734233 4748 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734244 4748 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734255 4748 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734264 4748 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734282 4748 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734301 4748 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734313 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734323 4748 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734335 4748 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734346 4748 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734356 4748 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734367 4748 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734377 4748 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734388 4748 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734399 4748 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734410 4748 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734420 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734431 4748 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734442 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734452 4748 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734466 4748 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734477 4748 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734488 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734499 4748 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734510 4748 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734523 4748 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734535 4748 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734546 4748 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734560 4748 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734571 4748 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734580 4748 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734591 4748 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734602 4748 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734614 4748 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734625 4748 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734636 4748 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734650 4748 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734664 4748 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734677 4748 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734689 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734700 4748 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734747 4748 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734763 4748 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734772 4748 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734780 4748 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734788 4748 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734797 4748 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734805 4748 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734813 4748 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734821 4748 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734829 4748 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734837 4748 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734845 4748 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.734853 4748 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.734866 4748 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.736075 4748 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.742284 4748 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.742494 4748 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.744219 4748 server.go:997] "Starting client certificate rotation"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.744274 4748 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.745171 4748 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-09 00:22:21.268223447 +0000 UTC
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.745306 4748 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.770296 4748 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.773200 4748 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.774946 4748 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.794597 4748 log.go:25] "Validated CRI v1 runtime API"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.840787 4748 log.go:25] "Validated CRI v1 image API"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.843966 4748 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.850998 4748 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-16-14-48-20-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.851052 4748 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.884399 4748 manager.go:217] Machine: {Timestamp:2026-02-16 14:52:54.880959197 +0000 UTC m=+0.572628306 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:657f6a80-f47d-43a3-b297-9137ed51b75e BootID:d233da3b-0bcf-41f1-88d1-a438f140df6f Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a5:0a:eb Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a5:0a:eb Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:34:a3:5f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4b:92:bf Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8d:f1:87 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3b:9c:c4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:52:0a:03:a1:32:60 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:c2:b4:e9:ba:58:a9 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.884869 4748 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.885208 4748 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.886829 4748 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.887170 4748 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.887219 4748 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.887566 4748 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.887589 4748 container_manager_linux.go:303] "Creating device plugin manager"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.888268 4748 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.888329 4748 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.888656 4748 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.888851 4748 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.893779 4748 kubelet.go:418] "Attempting to sync node with API server"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.893829 4748 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.893884 4748 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.893955 4748 kubelet.go:324] "Adding apiserver pod source"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.893981 4748 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.899859 4748 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.899891 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.900026 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.900040 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.900182 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.900977 4748 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.904011 4748 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.905839 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.905889 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.905906 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.905959 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.905986 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.906000 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.906015 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.906037 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.906055 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.906070 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.906090 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.906104 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.907142 4748 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.908076 4748 server.go:1280] "Started kubelet"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.909536 4748 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.910331 4748 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.909531 4748 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.911123 4748 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 14:52:54 crc systemd[1]: Started Kubernetes Kubelet.
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.912857 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.912922 4748 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.913696 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 23:16:57.933592404 +0000 UTC
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.914292 4748 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.914322 4748 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.914428 4748 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.915292 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.915397 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.915597 4748 factory.go:55] Registering systemd factory
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.915629 4748 factory.go:221] Registration of the systemd container factory successfully
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.915737 4748 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.920454 4748 factory.go:153] Registering CRI-O factory
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.920502 4748 factory.go:221] Registration of the crio container factory successfully
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.920630 4748 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.920673 4748 factory.go:103] Registering Raw factory
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.920709 4748 manager.go:1196] Started watching for new ooms in manager
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.926625 4748 server.go:460] "Adding debug handlers to kubelet server"
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.927024 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms"
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.927388 4748 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c1bdeb885adb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:52:54.908009179 +0000 UTC m=+0.599678248,LastTimestamp:2026-02-16 14:52:54.908009179 +0000 UTC m=+0.599678248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.930910 4748 manager.go:319] Starting recovery of all containers
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.934694 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.934813 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.934847 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.934879 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.934907 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935066 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935096 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935126 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935161 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935190 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935268 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935299 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935328 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935365 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935427 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935454 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935485 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935512 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935541 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935566 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935592 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935620 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935685 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935778 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935819 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.935845 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938133 4748 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938199 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938231 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938254 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938274 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938293 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938311 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938329 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938351 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938370 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938396 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938423 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938473 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" 
seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938494 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938513 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938532 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938550 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938568 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938600 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: 
I0216 14:52:54.938664 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938710 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938754 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938772 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938790 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938808 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938828 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938847 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938873 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938894 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938914 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938934 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938956 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938974 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.938993 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.939013 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.939031 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.939051 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941658 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941682 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941701 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941747 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941769 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941786 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941803 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Feb 16 14:52:54 
crc kubenswrapper[4748]: I0216 14:52:54.941823 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941840 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941860 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941880 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941896 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941914 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941933 4748 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941950 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941971 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.941990 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942037 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942061 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942081 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942100 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942118 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942138 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942157 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942176 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942196 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942215 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942234 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942253 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942274 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942293 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942310 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942347 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942369 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942389 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942407 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942427 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942444 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" 
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942466 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942484 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942502 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942521 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942549 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942569 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942590 
4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942613 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942636 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942656 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942677 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942699 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942744 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942763 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942817 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942837 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942922 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942943 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942962 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.942980 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943000 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943018 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943037 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943058 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943076 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943095 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943113 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943132 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943151 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943169 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943188 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943205 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943227 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943247 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943265 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943284 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943303 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943320 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943339 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943398 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943420 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943440 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943467 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943486 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943505 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943524 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943541 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943561 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943580 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943600 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943618 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943637 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943657 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943689 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943708 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943749 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943768 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943786 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943804 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943831 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943854 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943876 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943897 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943919 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943938 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943958 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943978 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.943997 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944016 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944036 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944136 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944158 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944181 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944200 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944221 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944242 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944264 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944283 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944303 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944322 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944342 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944363 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944384 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944402 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944420 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944440 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944459 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944480 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944498 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944701 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944762 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944786 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944805 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944825 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944853 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944878 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944901 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944922 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944944 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944963 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.944983 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945003 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945023 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945051 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945077 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945104 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945129 4748 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945148 4748 reconstruct.go:97] "Volume reconstruction finished"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.945167 4748 reconciler.go:26] "Reconciler: start to sync state"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.966292 4748 manager.go:324] Recovery completed
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.978977 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.982607 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.982681 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.982701 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.986038 4748 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.986202 4748 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.986311 4748 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.989742 4748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.992977 4748 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.993025 4748 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 16 14:52:54 crc kubenswrapper[4748]: I0216 14:52:54.993065 4748 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.993110 4748 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 16 14:52:54 crc kubenswrapper[4748]: W0216 14:52:54.994075 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:54 crc kubenswrapper[4748]: E0216 14:52:54.994232 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.006546 4748 policy_none.go:49] "None policy: Start"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.007503 4748 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.007526 4748 state_mem.go:35] "Initializing new in-memory state store"
Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.015790 4748 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.073630 4748 manager.go:334] "Starting Device Plugin manager"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.073962 4748 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.073999 4748 server.go:79] "Starting device plugin registration server"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.074793 4748 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.074824 4748 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.075276 4748 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.075392 4748 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.075407 4748 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.085302 4748 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.093574 4748 kubelet.go:2421] "SyncLoop ADD" source="file"
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.093685 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.095154 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.095182 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.095193 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.095344 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.095491 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.095542 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097326 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097365 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097332 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097439 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097451 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097583 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097708 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.097745 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.098765 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.098787 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.098797 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.098921 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.098933 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.098941 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.099086 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.099222 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.099258 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100059 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100084 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100176 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100188 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100196 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100261 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100367 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.100397 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102126 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102139 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102341 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102400 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102455 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102749 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.102798 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.104106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.104166 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.104183 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.128879 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.148669 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.148733 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.148766 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.148844 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.148954 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149020 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149064 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149097 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149124 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149146 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149171 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149204 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149234 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149256 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.149282 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.175400 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.176331 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.176381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.176402 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.176439 4748 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.177133 4748 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc" Feb 
16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250217 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250280 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250321 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250352 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250390 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250419 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250448 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250467 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250469 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250535 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250580 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250487 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250633 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250593 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250742 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250790 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250597 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250813 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250853 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250872 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250892 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250946 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250987 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.251019 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.251040 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.251048 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.251092 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.251084 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.250987 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.251161 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.377529 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.379340 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.379394 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.379419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.379454 4748 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.379944 4748 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" 
node="crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.433043 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.439167 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.454868 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.474343 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.481954 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:52:55 crc kubenswrapper[4748]: W0216 14:52:55.486476 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-40c17d214bd1bd1ddac00496e667eaecef7b2abcd29cff19f885be5f33c7c2f7 WatchSource:0}: Error finding container 40c17d214bd1bd1ddac00496e667eaecef7b2abcd29cff19f885be5f33c7c2f7: Status 404 returned error can't find the container with id 40c17d214bd1bd1ddac00496e667eaecef7b2abcd29cff19f885be5f33c7c2f7 Feb 16 14:52:55 crc kubenswrapper[4748]: W0216 14:52:55.488135 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-2ee86102b7ddebc70d38c4c97bcd24d8d029997a4120a17f63d95146b0d11b99 WatchSource:0}: Error finding container 2ee86102b7ddebc70d38c4c97bcd24d8d029997a4120a17f63d95146b0d11b99: Status 404 returned error can't find the container with id 
2ee86102b7ddebc70d38c4c97bcd24d8d029997a4120a17f63d95146b0d11b99
Feb 16 14:52:55 crc kubenswrapper[4748]: W0216 14:52:55.496993 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6b9386c5977e0f3e91e70d4a5ede020651666e71f765b6f7fbf4060a874bc052 WatchSource:0}: Error finding container 6b9386c5977e0f3e91e70d4a5ede020651666e71f765b6f7fbf4060a874bc052: Status 404 returned error can't find the container with id 6b9386c5977e0f3e91e70d4a5ede020651666e71f765b6f7fbf4060a874bc052
Feb 16 14:52:55 crc kubenswrapper[4748]: W0216 14:52:55.497889 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-b243f6ae1abb06eefc6d09ad426bb5ae85654127591c76dcae6423a168f7ab11 WatchSource:0}: Error finding container b243f6ae1abb06eefc6d09ad426bb5ae85654127591c76dcae6423a168f7ab11: Status 404 returned error can't find the container with id b243f6ae1abb06eefc6d09ad426bb5ae85654127591c76dcae6423a168f7ab11
Feb 16 14:52:55 crc kubenswrapper[4748]: W0216 14:52:55.509258 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e0dc6473f776d8427030411b8425af1b426dc3c92a38bfdbe88d4fde10f9910a WatchSource:0}: Error finding container e0dc6473f776d8427030411b8425af1b426dc3c92a38bfdbe88d4fde10f9910a: Status 404 returned error can't find the container with id e0dc6473f776d8427030411b8425af1b426dc3c92a38bfdbe88d4fde10f9910a
Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.530517 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="800ms"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.780603 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.782748 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.782802 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.782819 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.782850 4748 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.783433 4748 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc"
Feb 16 14:52:55 crc kubenswrapper[4748]: W0216 14:52:55.826550 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:55 crc kubenswrapper[4748]: E0216 14:52:55.826659 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.911780 4748 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.914958 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 08:09:32.33814668 +0000 UTC
Feb 16 14:52:55 crc kubenswrapper[4748]: I0216 14:52:55.999199 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6b9386c5977e0f3e91e70d4a5ede020651666e71f765b6f7fbf4060a874bc052"}
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.000832 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"40c17d214bd1bd1ddac00496e667eaecef7b2abcd29cff19f885be5f33c7c2f7"}
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.003141 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2ee86102b7ddebc70d38c4c97bcd24d8d029997a4120a17f63d95146b0d11b99"}
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.007157 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e0dc6473f776d8427030411b8425af1b426dc3c92a38bfdbe88d4fde10f9910a"}
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.008814 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b243f6ae1abb06eefc6d09ad426bb5ae85654127591c76dcae6423a168f7ab11"}
Feb 16 14:52:56 crc kubenswrapper[4748]: W0216 14:52:56.046141 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:56 crc kubenswrapper[4748]: E0216 14:52:56.046295 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:56 crc kubenswrapper[4748]: W0216 14:52:56.107031 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:56 crc kubenswrapper[4748]: E0216 14:52:56.107198 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:56 crc kubenswrapper[4748]: E0216 14:52:56.331786 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s"
Feb 16 14:52:56 crc kubenswrapper[4748]: W0216 14:52:56.350401 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:56 crc kubenswrapper[4748]: E0216 14:52:56.350502 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.584342 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.586476 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.586526 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.586537 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.586567 4748 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 14:52:56 crc kubenswrapper[4748]: E0216 14:52:56.587096 4748 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc"
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.912662 4748 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.915881 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:39:26.638803879 +0000 UTC
Feb 16 14:52:56 crc kubenswrapper[4748]: I0216 14:52:56.934145 4748 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 14:52:56 crc kubenswrapper[4748]: E0216 14:52:56.935927 4748 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.013520 4748 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1" exitCode=0
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.013600 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1"}
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.013741 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.015165 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.015196 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.015208 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.016245 4748 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3" exitCode=0
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.016348 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3"}
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.016492 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.018387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.018426 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.018443 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.020226 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290"}
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.020262 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4"}
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.020275 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7"}
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.023293 4748 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756" exitCode=0
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.023384 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756"}
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.023425 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.024217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.024256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.024269 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.028113 4748 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b" exitCode=0
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.028226 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b"}
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.028612 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.030973 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.031020 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.031035 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.049500 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.050657 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.050780 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.050795 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:57 crc kubenswrapper[4748]: E0216 14:52:57.069534 4748 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894c1bdeb885adb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:52:54.908009179 +0000 UTC m=+0.599678248,LastTimestamp:2026-02-16 14:52:54.908009179 +0000 UTC m=+0.599678248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 14:52:57 crc kubenswrapper[4748]: W0216 14:52:57.695739 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:57 crc kubenswrapper[4748]: E0216 14:52:57.696678 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.911031 4748 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:57 crc kubenswrapper[4748]: I0216 14:52:57.916528 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:17:20.697581247 +0000 UTC
Feb 16 14:52:57 crc kubenswrapper[4748]: E0216 14:52:57.933432 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="3.2s"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.034046 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.034093 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.034104 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.034200 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.035344 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.035396 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.035410 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.038256 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.038359 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.040113 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.040144 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.040156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.044540 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.044638 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.044662 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.044683 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.050499 4748 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec" exitCode=0
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.050610 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.050849 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.052263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.052312 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.052342 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.053784 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937"}
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.053981 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.055044 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.055091 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.055101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.188100 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.189393 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.189423 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.189432 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.189457 4748 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 14:52:58 crc kubenswrapper[4748]: E0216 14:52:58.189870 4748 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.51:6443: connect: connection refused" node="crc"
Feb 16 14:52:58 crc kubenswrapper[4748]: W0216 14:52:58.278657 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.51:6443: connect: connection refused
Feb 16 14:52:58 crc kubenswrapper[4748]: E0216 14:52:58.278765 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.51:6443: connect: connection refused" logger="UnhandledError"
Feb 16 14:52:58 crc kubenswrapper[4748]: I0216 14:52:58.916764 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 10:02:27.242856691 +0000 UTC
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.061016 4748 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d" exitCode=0
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.061083 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d"}
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.061209 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.062667 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.062698 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.062706 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.067262 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.067284 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55"}
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.067339 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.067320 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.067386 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.067409 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.068700 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.068748 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.068763 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.069015 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.069095 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.069121 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.069169 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.069192 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.069203 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.070138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.070167 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.070179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:52:59 crc kubenswrapper[4748]: I0216 14:52:59.917858 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:14:50.342391119 +0000 UTC
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.077491 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.077574 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.078396 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9ee5a1b79b7a97ad89754006e9477aba1132bdbdc85e34c90bae719ed5830306"}
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.078455 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"46bca6374693df66963dfed053db555018b3455f47c662346a68245715a5c9db"}
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.078481 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b27e93608fba4279b953fa1dcb00de496850daa23c2c3d92e739d7358fe9bb81"}
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.079436 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.079494 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.079513 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.213969 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.449101 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.449384 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.456136 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.456189 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.456209 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:00 crc kubenswrapper[4748]: I0216 14:53:00.918065 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 01:00:56.870265394 +0000 UTC
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.089816 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d53d8722f631552f16e54360803aa7a60d383f56d050cb9faacd0249fb1d185d"}
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.089908 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.089966 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.089908 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c96d10fa620a20fed7f80f566f8f7b1aab97ef7b84bdb4fc48e4b0f52d6fd25d"}
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.089967 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.091403 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.091472 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.091495 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.091510 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.091550 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.091573 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.226052 4748 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.390811 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.392688 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.392812 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.392833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.392878 4748 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 16 14:53:01 crc kubenswrapper[4748]: I0216 14:53:01.918805 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 17:16:45.962562285 +0000 UTC
Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.092923 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.094113 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:02 crc kubenswrapper[4748]: I0216
14:53:02.094179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.094192 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.356316 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.534338 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.534599 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.536353 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.536446 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.536532 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.545395 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:02 crc kubenswrapper[4748]: I0216 14:53:02.919674 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:57:09.418861163 +0000 UTC Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.095314 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:03 crc 
kubenswrapper[4748]: I0216 14:53:03.095385 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.096419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.096448 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.096460 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.096891 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.096931 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.096944 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.796239 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.796514 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.796581 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.798427 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.798540 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.798563 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:03 crc kubenswrapper[4748]: I0216 14:53:03.919965 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 05:31:31.030718899 +0000 UTC Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.832802 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.833171 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.835080 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.835138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.835156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.849664 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.849984 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.852128 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.852208 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.852249 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:04 crc kubenswrapper[4748]: I0216 14:53:04.920495 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 15:08:17.047980138 +0000 UTC Feb 16 14:53:05 crc kubenswrapper[4748]: E0216 14:53:05.085448 4748 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.158473 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.158731 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.160553 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.160633 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.160653 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.638952 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.639296 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.641543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.641627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.641650 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:05 crc kubenswrapper[4748]: I0216 14:53:05.921211 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:01:43.800874935 +0000 UTC Feb 16 14:53:06 crc kubenswrapper[4748]: I0216 14:53:06.921748 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 17:25:23.273927362 +0000 UTC Feb 16 14:53:07 crc kubenswrapper[4748]: I0216 14:53:07.589665 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:07 crc kubenswrapper[4748]: I0216 14:53:07.589888 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:07 crc kubenswrapper[4748]: I0216 14:53:07.592147 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:07 crc kubenswrapper[4748]: I0216 14:53:07.592188 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:07 crc kubenswrapper[4748]: I0216 14:53:07.592199 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:07 crc kubenswrapper[4748]: I0216 14:53:07.595806 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:07 crc kubenswrapper[4748]: 
I0216 14:53:07.922327 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:32:23.405572698 +0000 UTC Feb 16 14:53:08 crc kubenswrapper[4748]: I0216 14:53:08.111285 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:08 crc kubenswrapper[4748]: I0216 14:53:08.112485 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:08 crc kubenswrapper[4748]: I0216 14:53:08.112528 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:08 crc kubenswrapper[4748]: I0216 14:53:08.112547 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:08 crc kubenswrapper[4748]: I0216 14:53:08.912778 4748 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 16 14:53:08 crc kubenswrapper[4748]: I0216 14:53:08.923271 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 14:06:09.846070812 +0000 UTC Feb 16 14:53:08 crc kubenswrapper[4748]: W0216 14:53:08.967904 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 14:53:08 crc kubenswrapper[4748]: I0216 14:53:08.968039 4748 trace.go:236] Trace[832039314]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:52:58.965) (total time: 10002ms): Feb 16 14:53:08 crc kubenswrapper[4748]: Trace[832039314]: ---"Objects 
listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:53:08.967) Feb 16 14:53:08 crc kubenswrapper[4748]: Trace[832039314]: [10.002011395s] [10.002011395s] END Feb 16 14:53:08 crc kubenswrapper[4748]: E0216 14:53:08.968074 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 14:53:09 crc kubenswrapper[4748]: W0216 14:53:09.062248 4748 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 16 14:53:09 crc kubenswrapper[4748]: I0216 14:53:09.062351 4748 trace.go:236] Trace[811351795]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:52:59.061) (total time: 10001ms): Feb 16 14:53:09 crc kubenswrapper[4748]: Trace[811351795]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:53:09.062) Feb 16 14:53:09 crc kubenswrapper[4748]: Trace[811351795]: [10.001205504s] [10.001205504s] END Feb 16 14:53:09 crc kubenswrapper[4748]: E0216 14:53:09.062377 4748 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 16 14:53:09 crc kubenswrapper[4748]: I0216 14:53:09.436798 4748 patch_prober.go:28] interesting 
pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 14:53:09 crc kubenswrapper[4748]: I0216 14:53:09.436895 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 14:53:09 crc kubenswrapper[4748]: I0216 14:53:09.444461 4748 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 16 14:53:09 crc kubenswrapper[4748]: I0216 14:53:09.444541 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 16 14:53:09 crc kubenswrapper[4748]: I0216 14:53:09.923652 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 22:10:39.704848818 +0000 UTC Feb 16 14:53:10 crc kubenswrapper[4748]: I0216 14:53:10.294460 4748 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 16 14:53:10 
crc kubenswrapper[4748]: [+]log ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]etcd ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/generic-apiserver-start-informers ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/priority-and-fairness-filter ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-apiextensions-informers ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-apiextensions-controllers ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/crd-informer-synced ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-system-namespaces-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 16 14:53:10 crc kubenswrapper[4748]: 
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 16 14:53:10 crc kubenswrapper[4748]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/priority-and-fairness-config-producer ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/bootstrap-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/start-kube-aggregator-informers ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/apiservice-registration-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/apiservice-discovery-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]autoregister-completion ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/apiservice-openapi-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 16 14:53:10 crc kubenswrapper[4748]: livez check failed Feb 16 14:53:10 crc kubenswrapper[4748]: I0216 14:53:10.294561 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:53:10 crc kubenswrapper[4748]: I0216 14:53:10.589813 4748 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe 
status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 14:53:10 crc kubenswrapper[4748]: I0216 14:53:10.589895 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 14:53:10 crc kubenswrapper[4748]: I0216 14:53:10.924239 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 18:32:23.274956586 +0000 UTC Feb 16 14:53:11 crc kubenswrapper[4748]: I0216 14:53:11.924755 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:49:57.75972716 +0000 UTC Feb 16 14:53:12 crc kubenswrapper[4748]: I0216 14:53:12.391827 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 16 14:53:12 crc kubenswrapper[4748]: I0216 14:53:12.392476 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:12 crc kubenswrapper[4748]: I0216 14:53:12.393645 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:12 crc kubenswrapper[4748]: I0216 14:53:12.393680 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:12 crc kubenswrapper[4748]: I0216 14:53:12.393690 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:12 crc 
kubenswrapper[4748]: I0216 14:53:12.409217 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 16 14:53:12 crc kubenswrapper[4748]: I0216 14:53:12.925450 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:06:27.619829904 +0000 UTC Feb 16 14:53:13 crc kubenswrapper[4748]: I0216 14:53:13.127046 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:13 crc kubenswrapper[4748]: I0216 14:53:13.128472 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:13 crc kubenswrapper[4748]: I0216 14:53:13.128520 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:13 crc kubenswrapper[4748]: I0216 14:53:13.128533 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:13 crc kubenswrapper[4748]: I0216 14:53:13.926263 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:08:23.273965656 +0000 UTC Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.398675 4748 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:14 crc kubenswrapper[4748]: E0216 14:53:14.435400 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Feb 16 14:53:14 crc kubenswrapper[4748]: E0216 14:53:14.441277 4748 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config 
cache not synchronized" node="crc" Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.441456 4748 trace.go:236] Trace[121637984]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:04.072) (total time: 10368ms): Feb 16 14:53:14 crc kubenswrapper[4748]: Trace[121637984]: ---"Objects listed" error: 10368ms (14:53:14.441) Feb 16 14:53:14 crc kubenswrapper[4748]: Trace[121637984]: [10.368397091s] [10.368397091s] END Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.441500 4748 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.442932 4748 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.443351 4748 trace.go:236] Trace[1596401301]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 14:53:01.703) (total time: 12739ms): Feb 16 14:53:14 crc kubenswrapper[4748]: Trace[1596401301]: ---"Objects listed" error: 12739ms (14:53:14.443) Feb 16 14:53:14 crc kubenswrapper[4748]: Trace[1596401301]: [12.739401861s] [12.739401861s] END Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.443384 4748 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.459660 4748 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.467889 4748 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57062->192.168.126.11:17697: read: connection reset by peer" start-of-body= Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.467972 
4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:57062->192.168.126.11:17697: read: connection reset by peer" Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.479400 4748 csr.go:261] certificate signing request csr-jgjf2 is approved, waiting to be issued Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.490195 4748 csr.go:257] certificate signing request csr-jgjf2 is issued Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.744652 4748 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 14:53:14 crc kubenswrapper[4748]: W0216 14:53:14.745044 4748 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:14 crc kubenswrapper[4748]: W0216 14:53:14.745072 4748 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:14 crc kubenswrapper[4748]: W0216 14:53:14.745106 4748 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 16 14:53:14 crc kubenswrapper[4748]: E0216 14:53:14.745195 4748 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 
38.102.83.51:40324->38.102.83.51:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894c1be0eaf7af7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:52:55.497775863 +0000 UTC m=+1.189444932,LastTimestamp:2026-02-16 14:52:55.497775863 +0000 UTC m=+1.189444932,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.834262 4748 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.834346 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.906630 4748 apiserver.go:52] "Watching apiserver" Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.916599 4748 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 
14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.917035 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-xbqqk","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"]
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.917475 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.917492 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.917589 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.917785 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 16 14:53:14 crc kubenswrapper[4748]: E0216 14:53:14.917823 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:53:14 crc kubenswrapper[4748]: E0216 14:53:14.917953 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.918761 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.918787 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.919135 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xbqqk"
Feb 16 14:53:14 crc kubenswrapper[4748]: E0216 14:53:14.919227 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.924517 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.924548 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.924573 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.924517 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.924687 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.924837 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.924975 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.925580 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.925888 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.926090 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.927134 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 00:42:24.691968748 +0000 UTC
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.927623 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.927674 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.957858 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.970950 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.984196 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 14:53:14 crc kubenswrapper[4748]: I0216 14:53:14.998850 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.012030 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status:
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.015445 4748 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.024089 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.037098 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.046976 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047036
4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047085 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047118 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047154 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047186 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047217 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047249 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047284 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047311 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047342 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047371 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047404 4748 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047437 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047471 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047501 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047532 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047564 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047556 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047597 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047630 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047659 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047691 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 16 14:53:15 crc
kubenswrapper[4748]: I0216 14:53:15.047748 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047795 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047783 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047870 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047897 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047917 4748 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047929 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047946 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047962 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.047981 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048000 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod 
\"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048025 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048047 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048068 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048094 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048122 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048131 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048147 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048201 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048234 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048261 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048284 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048309 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048308 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048331 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048324 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048356 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048337 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048383 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048346 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048409 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048435 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048463 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048485 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048507 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048531 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" 
(UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048556 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048579 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048606 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048605 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048632 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048656 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048680 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048704 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048750 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048771 4748 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048792 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048814 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048838 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048862 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048886 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048908 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048955 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048980 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049002 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049028 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049056 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049087 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049113 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049138 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049161 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049183 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") 
" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049205 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049232 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049256 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049285 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049310 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049334 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: 
\"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049357 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049379 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049401 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049423 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049446 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049473 4748 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049500 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049524 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049548 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049572 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049598 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049623 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049648 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049672 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049697 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049739 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 16 14:53:15 crc 
kubenswrapper[4748]: I0216 14:53:15.049766 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049790 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049816 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049838 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049864 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049892 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049923 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049945 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049969 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049993 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050018 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050042 
4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050067 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050091 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050115 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050138 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050160 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050179 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050202 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050228 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050250 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050277 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050303 
4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050327 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050352 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050374 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050399 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050427 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" 
(UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050455 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050481 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050504 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050527 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050547 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050568 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050592 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050615 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050639 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050664 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050689 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050730 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050757 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050781 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050806 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050829 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050852 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050879 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050908 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050938 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050963 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050987 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051012 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051036 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051061 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051139 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051165 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051191 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051215 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051244 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051271 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051300 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051328 4748 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051354 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051382 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051408 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051435 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051458 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051483 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051509 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051620 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051653 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051678 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 
14:53:15.051705 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051815 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051840 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051863 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051889 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051913 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051935 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051961 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051986 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052008 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052037 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052065 4748 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052089 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052111 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052136 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052159 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052183 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052209 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052234 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052257 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052279 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052301 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052326 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052354 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052383 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052410 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052470 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052505 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052531 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052561 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052597 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052625 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1624cecb-4def-4246-9cd2-b9a6f4e5920c-hosts-file\") pod \"node-resolver-xbqqk\" (UID: \"1624cecb-4def-4246-9cd2-b9a6f4e5920c\") " pod="openshift-dns/node-resolver-xbqqk" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052654 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h96pk\" (UniqueName: 
\"kubernetes.io/projected/1624cecb-4def-4246-9cd2-b9a6f4e5920c-kube-api-access-h96pk\") pod \"node-resolver-xbqqk\" (UID: \"1624cecb-4def-4246-9cd2-b9a6f4e5920c\") " pod="openshift-dns/node-resolver-xbqqk" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052690 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052736 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052769 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052806 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052836 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052865 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052895 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052925 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052971 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053131 4748 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053149 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053164 4748 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053178 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053194 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053208 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053225 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 
14:53:15.053243 4748 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053257 4748 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.048836 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.054045 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049030 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049121 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049214 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049254 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049353 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049375 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049442 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049484 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049656 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.049744 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050027 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050189 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050193 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050190 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050212 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050398 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050514 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050564 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.050928 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051207 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051303 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051808 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051830 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.051839 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052021 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052327 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052334 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052348 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052373 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052416 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052649 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.052748 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053342 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053373 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053790 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053857 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.053890 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.054086 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.054190 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.054135 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.054639 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.054645 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.054850 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055059 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055084 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055097 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055337 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055660 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055740 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055887 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055925 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.055971 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.056022 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.056280 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.056658 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.057008 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.057200 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.057403 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.057438 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.058427 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.058487 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.058673 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.058818 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.059070 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.059065 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.059172 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.059193 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.059680 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.059747 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.059792 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.060149 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.060328 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.060368 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.060793 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.060836 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.060896 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.061171 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.061310 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.061465 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.062159 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.062831 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.063370 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.063379 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.063541 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.063949 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.064053 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: 
"kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.064468 4748 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.064597 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.064680 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.064912 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.065422 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.065493 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.065613 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.065693 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.065756 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.066231 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.066325 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.067100 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.067235 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.067492 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.067675 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.066099 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.067939 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.068018 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.068171 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.068236 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.069043 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.068873 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.069160 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:15.567154705 +0000 UTC m=+21.258823744 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.069375 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.069467 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.069553 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.070303 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.070914 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.071567 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.071636 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.071669 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:15.571630703 +0000 UTC m=+21.263299912 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.071782 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.072076 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.072426 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.072446 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.072500 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.073232 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.073456 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.073821 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.073883 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.073985 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.074092 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.075570 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.075649 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.075811 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.076074 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.076396 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.077178 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.077184 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.077484 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.077514 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.077543 4748 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.077630 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:15.577603466 +0000 UTC m=+21.269272515 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.077783 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:15.57773895 +0000 UTC m=+21.269407999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078167 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078285 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078435 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078476 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078640 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078869 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078904 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.078992 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079061 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079151 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079159 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079251 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079266 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079351 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079387 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079461 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079599 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.079750 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.080064 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.080263 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.080296 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.081418 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.081560 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.081741 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.082199 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.082370 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.082611 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.087788 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.089067 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.090856 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.096429 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.098475 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.098491 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.099101 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.099099 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.099669 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.099795 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.100507 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.106051 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.106360 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.106437 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.106498 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.106601 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:15.606582574 +0000 UTC m=+21.298251613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.109440 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.114813 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.114998 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.115602 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.115647 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.116650 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.116933 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.117340 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.117827 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.117848 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.118044 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.120949 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.121051 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.121125 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.121539 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.121862 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.121976 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.121985 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.122884 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.123781 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.126907 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.129107 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.134427 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.137342 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.140492 4748 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55" exitCode=255 Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.140530 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55"} Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.144894 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.146102 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.146926 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.153709 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.153784 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1624cecb-4def-4246-9cd2-b9a6f4e5920c-hosts-file\") pod \"node-resolver-xbqqk\" (UID: \"1624cecb-4def-4246-9cd2-b9a6f4e5920c\") " pod="openshift-dns/node-resolver-xbqqk" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.153815 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h96pk\" (UniqueName: \"kubernetes.io/projected/1624cecb-4def-4246-9cd2-b9a6f4e5920c-kube-api-access-h96pk\") pod \"node-resolver-xbqqk\" (UID: \"1624cecb-4def-4246-9cd2-b9a6f4e5920c\") " pod="openshift-dns/node-resolver-xbqqk" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.153828 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.153863 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.154030 4748 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.154048 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.154063 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.154078 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.154093 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.154340 4748 scope.go:117] "RemoveContainer" containerID="6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.154598 4748 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.155802 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1624cecb-4def-4246-9cd2-b9a6f4e5920c-hosts-file\") pod \"node-resolver-xbqqk\" (UID: \"1624cecb-4def-4246-9cd2-b9a6f4e5920c\") " 
pod="openshift-dns/node-resolver-xbqqk" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.155832 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.155887 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157058 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157098 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157119 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157148 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157163 4748 reconciler_common.go:293] "Volume detached for volume 
\"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157177 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157190 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157212 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157225 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157238 4748 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157265 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157301 4748 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 
crc kubenswrapper[4748]: I0216 14:53:15.157315 4748 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157329 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157345 4748 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157367 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157381 4748 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157396 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157411 4748 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157436 4748 
reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157454 4748 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157554 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157582 4748 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157600 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157630 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157643 4748 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157664 4748 reconciler_common.go:293] "Volume detached for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157678 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157692 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157705 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157777 4748 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157791 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157805 4748 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157825 4748 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157838 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157857 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157870 4748 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157888 4748 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157901 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157915 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157929 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.157948 4748 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.158162 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.158475 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159865 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159902 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159919 4748 
reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159935 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159958 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159971 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159983 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.159998 4748 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160012 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160026 4748 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160037 4748 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160053 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160064 4748 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160075 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160085 4748 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160100 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160113 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node 
\"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160126 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160143 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160163 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160174 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160185 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160201 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160212 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160223 4748 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160233 4748 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160248 4748 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160259 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160270 4748 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160281 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160297 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160307 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160319 4748 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160335 4748 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160347 4748 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160359 4748 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160371 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160386 4748 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160397 4748 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") 
on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160408 4748 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160419 4748 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160433 4748 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160443 4748 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160454 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160467 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160482 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc 
kubenswrapper[4748]: I0216 14:53:15.160494 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160505 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160521 4748 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160531 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160542 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160553 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160568 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160579 4748 reconciler_common.go:293] "Volume 
detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160588 4748 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160599 4748 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160612 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160622 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160633 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160646 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160659 4748 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc 
kubenswrapper[4748]: I0216 14:53:15.160826 4748 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160842 4748 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160856 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160867 4748 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160878 4748 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160893 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160909 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160924 4748 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160934 4748 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160944 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160957 4748 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160968 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.160979 4748 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161014 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161025 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161036 4748 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161048 4748 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161063 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161075 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161107 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161124 4748 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161143 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161158 4748 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161169 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161187 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161197 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161207 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161219 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161235 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" 
Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161245 4748 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161256 4748 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161270 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161309 4748 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161324 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161335 4748 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161345 4748 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 
14:53:15.161359 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161369 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161380 4748 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161423 4748 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161436 4748 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161448 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161467 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161629 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.161643 4748 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.162372 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.162953 4748 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.165875 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.165923 4748 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.165945 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.165965 4748 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc 
kubenswrapper[4748]: I0216 14:53:15.165983 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166001 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166018 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166038 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166058 4748 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166076 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166094 4748 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166113 4748 reconciler_common.go:293] 
"Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166130 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166148 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166166 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166183 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166199 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166217 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166234 4748 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166252 4748 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166271 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.166398 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.169742 4748 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.169766 4748 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.169789 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.169811 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" 
DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.169829 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.170296 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.176535 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.180186 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h96pk\" (UniqueName: \"kubernetes.io/projected/1624cecb-4def-4246-9cd2-b9a6f4e5920c-kube-api-access-h96pk\") pod \"node-resolver-xbqqk\" (UID: \"1624cecb-4def-4246-9cd2-b9a6f4e5920c\") " pod="openshift-dns/node-resolver-xbqqk" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.189252 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.202612 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.217747 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.230744 4748 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.232313 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.238566 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.240876 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.252288 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.263307 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xbqqk" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.275152 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.278992 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.326471 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.356982 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.384938 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.418368 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 
dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.435580 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.451521 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.467612 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.482947 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.492607 4748 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-16 14:48:14 +0000 UTC, rotation deadline is 2026-11-02 12:05:51.50567735 +0000 UTC Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.492672 4748 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6213h12m36.013009033s for next certificate rotation Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.497036 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.511544 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.523679 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.533642 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.556213 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.586220 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.586334 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586402 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:53:16.586369202 +0000 UTC m=+22.278038241 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.586456 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.586503 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586526 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586549 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586562 4748 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586639 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:16.586620738 +0000 UTC m=+22.278289777 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586656 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586707 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586735 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:16.586726881 +0000 UTC m=+22.278395920 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.586773 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:16.586764062 +0000 UTC m=+22.278433101 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.687053 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.687332 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.687374 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:15 crc 
kubenswrapper[4748]: E0216 14:53:15.687404 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: E0216 14:53:15.687497 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:16.687474286 +0000 UTC m=+22.379143325 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:15 crc kubenswrapper[4748]: I0216 14:53:15.927395 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 19:25:29.840102701 +0000 UTC Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.127922 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-p7ttg"] Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.128711 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.129084 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-dw679"] Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.129503 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.131026 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.131626 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.131931 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.133055 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.133273 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.133517 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.133789 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.134046 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.134271 4748 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.134481 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.148126 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.148180 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1f63f8d36604e2be10179805a18422dc504b33c7c273b334d9e7c59f44fadfe8"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.159914 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.163235 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.165292 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e"} Feb 16 
14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.165908 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.167954 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.167981 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.167992 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"2c1d19277f447f42710fb225e42b101839a8b6f651e6a083a85c24fb8f4610f4"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.169513 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xbqqk" event={"ID":"1624cecb-4def-4246-9cd2-b9a6f4e5920c","Type":"ContainerStarted","Data":"9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.169536 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xbqqk" event={"ID":"1624cecb-4def-4246-9cd2-b9a6f4e5920c","Type":"ContainerStarted","Data":"8d9adb7508520d8245758080d1089b83bbac42ff321775d111780e3e6581a12a"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.170636 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"4f1000dcf0868681030df4a7675ddf3bc148dc2ef505ac3b71a4ae179454e81e"} Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.172269 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.178471 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.194415 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.208096 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.221104 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.238641 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\
\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserve
r-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' 
detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a
8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.260103 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.288518 4748 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.292953 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-cni-dir\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.292998 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-hostroot\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293018 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-etc-kubernetes\") pod \"multus-dw679\" (UID: 
\"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293045 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q57v\" (UniqueName: \"kubernetes.io/projected/1724aef8-25e0-40aa-86be-2ca7849960f1-kube-api-access-6q57v\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293241 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1724aef8-25e0-40aa-86be-2ca7849960f1-cni-binary-copy\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293283 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-socket-dir-parent\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293318 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-kubelet\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293352 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-cni-bin\") pod \"multus-dw679\" (UID: 
\"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293462 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-os-release\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293526 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-netns\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293634 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-k8s-cni-cncf-io\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293796 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fafb0b41-fe7a-4d57-a714-4666580d6ae6-proxy-tls\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293833 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-system-cni-dir\") pod \"multus-dw679\" (UID: 
\"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293945 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-cni-multus\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.293989 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-daemon-config\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.294018 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-multus-certs\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.294039 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-conf-dir\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.294438 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-cnibin\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " 
pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.294517 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fafb0b41-fe7a-4d57-a714-4666580d6ae6-mcd-auth-proxy-config\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.294596 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fafb0b41-fe7a-4d57-a714-4666580d6ae6-rootfs\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.294618 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8vp7\" (UniqueName: \"kubernetes.io/projected/fafb0b41-fe7a-4d57-a714-4666580d6ae6-kube-api-access-d8vp7\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.321786 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.377985 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395179 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fafb0b41-fe7a-4d57-a714-4666580d6ae6-rootfs\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395232 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8vp7\" (UniqueName: \"kubernetes.io/projected/fafb0b41-fe7a-4d57-a714-4666580d6ae6-kube-api-access-d8vp7\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395381 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/fafb0b41-fe7a-4d57-a714-4666580d6ae6-rootfs\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395654 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-cni-dir\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395675 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-cni-dir\") pod \"multus-dw679\" (UID: 
\"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395740 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-hostroot\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395785 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-hostroot\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395800 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-etc-kubernetes\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395850 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-etc-kubernetes\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395877 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1724aef8-25e0-40aa-86be-2ca7849960f1-cni-binary-copy\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395903 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-socket-dir-parent\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395928 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-kubelet\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395945 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q57v\" (UniqueName: \"kubernetes.io/projected/1724aef8-25e0-40aa-86be-2ca7849960f1-kube-api-access-6q57v\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395962 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-cni-bin\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395978 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-os-release\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.395996 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-netns\") pod 
\"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396027 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fafb0b41-fe7a-4d57-a714-4666580d6ae6-proxy-tls\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396049 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-system-cni-dir\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396066 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-k8s-cni-cncf-io\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396083 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-daemon-config\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396098 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-multus-certs\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " 
pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396112 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-cni-multus\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396136 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-cnibin\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396156 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-conf-dir\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396175 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fafb0b41-fe7a-4d57-a714-4666580d6ae6-mcd-auth-proxy-config\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396882 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fafb0b41-fe7a-4d57-a714-4666580d6ae6-mcd-auth-proxy-config\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc 
kubenswrapper[4748]: I0216 14:53:16.396954 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-socket-dir-parent\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.396982 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-kubelet\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397080 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/1724aef8-25e0-40aa-86be-2ca7849960f1-cni-binary-copy\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397150 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-cni-bin\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397175 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-k8s-cni-cncf-io\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397749 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-os-release\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397836 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-daemon-config\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397839 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-var-lib-cni-multus\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397899 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-multus-conf-dir\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397902 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-multus-certs\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.397983 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-host-run-netns\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " 
pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.398073 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-system-cni-dir\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.398100 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/1724aef8-25e0-40aa-86be-2ca7849960f1-cnibin\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.408884 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fafb0b41-fe7a-4d57-a714-4666580d6ae6-proxy-tls\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.411896 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.428679 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q57v\" (UniqueName: \"kubernetes.io/projected/1724aef8-25e0-40aa-86be-2ca7849960f1-kube-api-access-6q57v\") pod \"multus-dw679\" (UID: \"1724aef8-25e0-40aa-86be-2ca7849960f1\") " 
pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.431742 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"
image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.434813 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8vp7\" (UniqueName: \"kubernetes.io/projected/fafb0b41-fe7a-4d57-a714-4666580d6ae6-kube-api-access-d8vp7\") pod \"machine-config-daemon-p7ttg\" (UID: \"fafb0b41-fe7a-4d57-a714-4666580d6ae6\") " pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.448919 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-dw679" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.452035 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.463124 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:53:16 crc kubenswrapper[4748]: W0216 14:53:16.480921 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfafb0b41_fe7a_4d57_a714_4666580d6ae6.slice/crio-50302017e5386f2ced974f1416b14053683f870eab0517d712df0a7d87057abf WatchSource:0}: Error finding container 50302017e5386f2ced974f1416b14053683f870eab0517d712df0a7d87057abf: Status 404 returned error can't find the container with id 50302017e5386f2ced974f1416b14053683f870eab0517d712df0a7d87057abf Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.486616 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445
c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.506186 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.512410 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r662f"] Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.513213 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-gt5ps"] Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.513837 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.513935 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.517627 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.517944 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.518085 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.518268 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.518290 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.518532 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.518641 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.518682 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.518691 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.528329 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.554277 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.568608 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.587102 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.599844 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.600166 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:18.600126753 +0000 UTC m=+24.291795802 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600629 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-netd\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600674 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600701 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600756 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600795 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600819 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-config\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600846 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cnibin\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600867 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-log-socket\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600887 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-systemd\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600908 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67j69\" (UniqueName: \"kubernetes.io/projected/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-kube-api-access-67j69\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600939 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600968 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.600993 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601037 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-slash\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601062 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-env-overrides\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601088 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovn-node-metrics-cert\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601117 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-kubelet\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601142 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-etc-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601165 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-script-lib\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601192 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-systemd-units\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601215 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-netns\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601239 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-node-log\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601265 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-system-cni-dir\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc 
kubenswrapper[4748]: I0216 14:53:16.601299 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601327 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-os-release\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601361 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-bin\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601397 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdsgq\" (UniqueName: \"kubernetes.io/projected/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-kube-api-access-wdsgq\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601424 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-var-lib-openvswitch\") pod 
\"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601452 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-ovn\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.601477 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.601599 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.601648 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:18.601635519 +0000 UTC m=+24.293304558 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.601832 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.601908 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.601931 4748 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.601977 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.602010 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:18.601997348 +0000 UTC m=+24.293666387 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.602067 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:18.602055079 +0000 UTC m=+24.293724118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.606884 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.619604 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.630462 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.644293 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.658441 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.673560 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.686970 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.697214 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702226 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702259 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-netd\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702284 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702313 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702335 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702355 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-config\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702375 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cnibin\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702394 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-log-socket\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702414 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67j69\" (UniqueName: \"kubernetes.io/projected/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-kube-api-access-67j69\") pod \"ovnkube-node-r662f\" (UID: 
\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702450 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702468 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-systemd\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702483 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702497 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-slash\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702511 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-env-overrides\") pod \"ovnkube-node-r662f\" (UID: 
\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702527 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovn-node-metrics-cert\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702542 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-kubelet\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702555 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-etc-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702571 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-script-lib\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702587 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-netns\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702601 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-node-log\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702625 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-system-cni-dir\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702640 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-systemd-units\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702663 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-os-release\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702680 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdsgq\" (UniqueName: \"kubernetes.io/projected/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-kube-api-access-wdsgq\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " 
pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702695 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-var-lib-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702729 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-bin\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702747 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702762 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-ovn\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702818 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-ovn\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 
14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702852 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-ovn-kubernetes\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702871 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-netd\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.702895 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.702971 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.702987 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.702996 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] 
Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.703028 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:18.70301633 +0000 UTC m=+24.394685369 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.703054 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.703365 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-etc-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.703471 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cnibin\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 
14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.703554 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-log-socket\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.703590 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-config\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.703913 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.703951 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-systemd\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704022 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-script-lib\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704057 4748 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-netns\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704080 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-node-log\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704101 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-system-cni-dir\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704122 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-systemd-units\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704290 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-os-release\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704517 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-var-lib-openvswitch\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704588 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-bin\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704655 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cni-binary-copy\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.704705 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-slash\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.705185 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.705265 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-kubelet\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.705323 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-env-overrides\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.710423 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovn-node-metrics-cert\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.713210 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.721672 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdsgq\" (UniqueName: \"kubernetes.io/projected/81c4b8fe-3db6-4720-81d0-a1d2d33470bb-kube-api-access-wdsgq\") pod \"multus-additional-cni-plugins-gt5ps\" (UID: \"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\") " pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.722356 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67j69\" (UniqueName: \"kubernetes.io/projected/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-kube-api-access-67j69\") pod \"ovnkube-node-r662f\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.729895 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.743381 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.761907 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:16Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.840175 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.846159 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" Feb 16 14:53:16 crc kubenswrapper[4748]: W0216 14:53:16.861652 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f88ea54_3399_4d84_bc96_5b7d9575bbf5.slice/crio-9902f07f2c6bc2c2b6676dc6c7a46a3bc3887f2e44d6b11b872eb627a49fcdd2 WatchSource:0}: Error finding container 9902f07f2c6bc2c2b6676dc6c7a46a3bc3887f2e44d6b11b872eb627a49fcdd2: Status 404 returned error can't find the container with id 9902f07f2c6bc2c2b6676dc6c7a46a3bc3887f2e44d6b11b872eb627a49fcdd2 Feb 16 14:53:16 crc kubenswrapper[4748]: W0216 14:53:16.862642 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81c4b8fe_3db6_4720_81d0_a1d2d33470bb.slice/crio-67afb3fcb189bde28d8ef632b72f4ae9609b7942fe2d34e44b6f009eaf2cbee1 WatchSource:0}: Error finding container 67afb3fcb189bde28d8ef632b72f4ae9609b7942fe2d34e44b6f009eaf2cbee1: Status 404 returned error can't find the container with id 67afb3fcb189bde28d8ef632b72f4ae9609b7942fe2d34e44b6f009eaf2cbee1 Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.929154 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 14:02:34.814682082 +0000 UTC Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.994327 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.994352 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.994436 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.994502 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.994622 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:16 crc kubenswrapper[4748]: E0216 14:53:16.994828 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:16 crc kubenswrapper[4748]: I0216 14:53:16.999179 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.000061 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.001469 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.002220 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.004242 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.004860 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.006138 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.006880 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.008681 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.009332 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.010031 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.011635 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.012298 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.013380 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.014096 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.015303 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.016033 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.016530 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.017729 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.018444 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.019038 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.020108 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.020535 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.021565 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.022046 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.023034 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.023639 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.024533 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.025195 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.026006 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.026445 4748 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.026537 4748 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.028798 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.029685 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.030157 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.031699 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.032684 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.033185 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.034201 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.034863 4748 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.035637 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.036235 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.037203 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.037793 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.038612 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.039253 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.040176 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.040884 4748 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.041738 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.042222 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.043023 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.043505 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.044116 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.045076 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.177377 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" 
event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerStarted","Data":"1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.177454 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerStarted","Data":"67afb3fcb189bde28d8ef632b72f4ae9609b7942fe2d34e44b6f009eaf2cbee1"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.179320 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.179391 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.179413 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"50302017e5386f2ced974f1416b14053683f870eab0517d712df0a7d87057abf"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.181318 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243" exitCode=0 Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.181391 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" 
event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.181416 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"9902f07f2c6bc2c2b6676dc6c7a46a3bc3887f2e44d6b11b872eb627a49fcdd2"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.183224 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerStarted","Data":"e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.183298 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerStarted","Data":"a8dd4760a415c85536c55bfd52f4c0ae50e609bc1637c46fb92b37dd789135ff"} Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.198563 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.215811 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.233129 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.249341 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.267654 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.283954 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.293784 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.304385 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.339097 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.353739 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.366344 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.379207 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.393298 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.411580 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.428191 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.449686 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.468818 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.482907 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.512292 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.540806 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.561402 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.578621 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.597199 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.605376 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.606816 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.615503 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.630942 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-zkqs9"] Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.631230 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.636838 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.636825 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.638434 4748 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.640969 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.641195 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.657383 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4
2745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.689682 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.707036 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.711237 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e2941c0-a633-40af-902c-1304d8df18b0-serviceca\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.711285 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e2941c0-a633-40af-902c-1304d8df18b0-host\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.711306 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg4kc\" (UniqueName: \"kubernetes.io/projected/8e2941c0-a633-40af-902c-1304d8df18b0-kube-api-access-lg4kc\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.723421 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.738505 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.753364 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.767879 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.779942 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.792647 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.806790 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.812598 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg4kc\" (UniqueName: \"kubernetes.io/projected/8e2941c0-a633-40af-902c-1304d8df18b0-kube-api-access-lg4kc\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.812663 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e2941c0-a633-40af-902c-1304d8df18b0-serviceca\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.812690 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/8e2941c0-a633-40af-902c-1304d8df18b0-host\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.812766 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8e2941c0-a633-40af-902c-1304d8df18b0-host\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.814062 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8e2941c0-a633-40af-902c-1304d8df18b0-serviceca\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.821351 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.845482 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg4kc\" (UniqueName: \"kubernetes.io/projected/8e2941c0-a633-40af-902c-1304d8df18b0-kube-api-access-lg4kc\") pod \"node-ca-zkqs9\" (UID: \"8e2941c0-a633-40af-902c-1304d8df18b0\") " pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 
14:53:17.877185 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.920376 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.929597 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:57:14.970787001 +0000 UTC Feb 16 14:53:17 crc kubenswrapper[4748]: I0216 14:53:17.966823 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:17Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.107534 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-zkqs9" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.199632 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606"} Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.202299 4748 generic.go:334] "Generic (PLEG): container finished" podID="81c4b8fe-3db6-4720-81d0-a1d2d33470bb" containerID="1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393" exitCode=0 Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.202357 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerDied","Data":"1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393"} Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.207648 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e"} Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.207706 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8"} Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.207747 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8"} Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 
14:53:18.207761 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891"} Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.209951 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zkqs9" event={"ID":"8e2941c0-a633-40af-902c-1304d8df18b0","Type":"ContainerStarted","Data":"5e6431f622c230d89eaa832666f7d0fd1d7b609ce7888b0dc6446949f54ec470"} Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.213847 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e1
8d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.218315 4748 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.225744 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.240578 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.254112 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.271185 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.284557 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.300321 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.318467 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.341016 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.377093 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.418950 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.455746 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.497243 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.544968 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.576781 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.615348 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.621651 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.621766 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.621819 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.621860 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.621939 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.621975 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.621994 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.622007 4748 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.622032 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:22.622013498 +0000 UTC m=+28.313682537 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.621948 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.622058 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:22.622042099 +0000 UTC m=+28.313711198 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.622086 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:22.62207926 +0000 UTC m=+28.313748289 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.622132 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:22.622124771 +0000 UTC m=+28.313793810 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.656756 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.704362 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.722450 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.722654 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.722687 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.722700 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 
16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.722793 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:22.722774554 +0000 UTC m=+28.414443603 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.737999 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.778127 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.819957 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.859765 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.897357 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.930481 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 20:35:52.655535314 +0000 UTC Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.939323 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.981260 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:18Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.994071 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.994134 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.994213 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:18 crc kubenswrapper[4748]: I0216 14:53:18.994301 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.994326 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:18 crc kubenswrapper[4748]: E0216 14:53:18.994502 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.021379 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.070799 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.106199 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.214619 4748 generic.go:334] "Generic (PLEG): container finished" podID="81c4b8fe-3db6-4720-81d0-a1d2d33470bb" containerID="b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae" exitCode=0 Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.214741 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" 
event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerDied","Data":"b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae"} Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.219482 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01"} Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.219518 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354"} Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.221215 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-zkqs9" event={"ID":"8e2941c0-a633-40af-902c-1304d8df18b0","Type":"ContainerStarted","Data":"cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd"} Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.243782 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.284831 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.316105 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.333915 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.349534 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.367686 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.381787 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.418210 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.462210 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.504631 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.541824 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.581705 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.621122 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.661073 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.698162 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.748465 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.785387 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.821369 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.859828 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.901906 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.931625 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:59:26.815455644 +0000 UTC Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.943184 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:19 crc kubenswrapper[4748]: I0216 14:53:19.983129 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:19Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.031106 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.062337 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.101350 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.147929 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.181227 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.220580 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.227700 4748 generic.go:334] "Generic (PLEG): container finished" podID="81c4b8fe-3db6-4720-81d0-a1d2d33470bb" containerID="8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8" exitCode=0 Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.228540 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerDied","Data":"8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8"} Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.270205 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.302222 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.345686 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.380298 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.417755 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.463930 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.510551 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.538022 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.577575 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.625997 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.659474 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.699076 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.747226 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.789022 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 
14:53:20.841996 4748 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.843849 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.843890 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.843903 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.844040 4748 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.858218 4748 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.858531 4748 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.860085 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.860123 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.860132 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.860148 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.860158 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:20Z","lastTransitionTime":"2026-02-16T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.875315 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.879868 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.879914 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.879929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.879947 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.879962 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:20Z","lastTransitionTime":"2026-02-16T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.894755 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.899293 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.899354 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.899372 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.899398 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.899415 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:20Z","lastTransitionTime":"2026-02-16T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.917169 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.928010 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.928240 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.928959 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.929111 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.929244 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:20Z","lastTransitionTime":"2026-02-16T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.931861 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:54:22.824992407 +0000 UTC Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.946249 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",
\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.950756 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.950794 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.950808 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.950825 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.950839 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:20Z","lastTransitionTime":"2026-02-16T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.972246 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:20Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.972487 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.975026 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.975063 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.975076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.975094 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.975108 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:20Z","lastTransitionTime":"2026-02-16T14:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.993599 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.993783 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.993605 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.994032 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:20 crc kubenswrapper[4748]: I0216 14:53:20.994030 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:20 crc kubenswrapper[4748]: E0216 14:53:20.994149 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.078875 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.078967 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.078992 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.079030 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.079052 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.181655 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.181701 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.181765 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.181788 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.181798 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.237463 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.240030 4748 generic.go:334] "Generic (PLEG): container finished" podID="81c4b8fe-3db6-4720-81d0-a1d2d33470bb" containerID="64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798" exitCode=0 Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.240128 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerDied","Data":"64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.255011 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.269189 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.285005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.285076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.285097 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.285125 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.285152 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.296753 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.314998 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.332930 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.349593 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.363119 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.377006 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.388388 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.388461 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.388477 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.388500 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.388513 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.389979 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.401825 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.414663 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.433779 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.448530 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.463737 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:21Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.492142 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.492222 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.492241 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc 
kubenswrapper[4748]: I0216 14:53:21.492271 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.492296 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.596125 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.596179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.596196 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.596219 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.596238 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.699851 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.699902 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.699916 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.699934 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.699947 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.803777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.803849 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.803874 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.803900 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.803924 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.908228 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.908291 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.908302 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.908320 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.908331 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:21Z","lastTransitionTime":"2026-02-16T14:53:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:21 crc kubenswrapper[4748]: I0216 14:53:21.932671 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 15:34:25.938982767 +0000 UTC Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.011669 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.011792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.011813 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.011853 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.011897 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.115449 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.115516 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.115541 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.115569 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.115595 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.217689 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.217776 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.217797 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.217817 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.217833 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.247487 4748 generic.go:334] "Generic (PLEG): container finished" podID="81c4b8fe-3db6-4720-81d0-a1d2d33470bb" containerID="de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66" exitCode=0 Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.247551 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerDied","Data":"de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.268440 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.289531 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.316197 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.320702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.320757 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.320766 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.320782 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.320792 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.340215 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.359208 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.373224 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.392244 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.409374 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.423743 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.429306 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.429349 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.429367 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.429392 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.429407 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.439329 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.457490 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.476328 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede71
3de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.492871 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.507532 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.533958 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.534009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.534026 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 
14:53:22.534049 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.534063 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.637282 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.637363 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.637399 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.637424 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.637529 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.668165 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668372 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:30.668354492 +0000 UTC m=+36.360023531 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.668369 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.668422 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.668456 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668471 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668544 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:30.668527116 +0000 UTC m=+36.360196195 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668549 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668578 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:30.668571207 +0000 UTC m=+36.360240246 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668759 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668855 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668877 4748 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.668966 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:30.668936036 +0000 UTC m=+36.360605275 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.741678 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.742217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.742227 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.742250 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.742289 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.769903 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.770154 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.770182 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.770201 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.770268 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:30.770245924 +0000 UTC m=+36.461914993 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.845978 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.846048 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.846100 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.846133 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.846154 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.933553 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 08:34:01.80480535 +0000 UTC Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.949405 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.949477 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.949497 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.949528 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.949544 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:22Z","lastTransitionTime":"2026-02-16T14:53:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.994358 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.994452 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:22 crc kubenswrapper[4748]: I0216 14:53:22.994606 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.994596 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.994842 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:22 crc kubenswrapper[4748]: E0216 14:53:22.994908 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.053049 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.053143 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.053167 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.053203 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.053244 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.157038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.157090 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.157101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.157122 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.157133 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.259589 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.259654 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.259671 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.259698 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.259744 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.260582 4748 generic.go:334] "Generic (PLEG): container finished" podID="81c4b8fe-3db6-4720-81d0-a1d2d33470bb" containerID="cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97" exitCode=0 Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.260695 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerDied","Data":"cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.270224 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.271184 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.271226 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.285437 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.314085 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.324813 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.324930 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.344829 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.360832 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.362130 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.362174 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.362186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.362205 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.362220 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.377400 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir
\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b8
2799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.393532 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.415212 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede71
3de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
2-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.434049 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.447492 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.469051 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.469107 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.469122 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.469144 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.469158 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.469915 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.486749 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.500730 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.517246 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.542903 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.558638 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.570248 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.573035 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.573071 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.573080 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.573095 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.573106 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.585161 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.600335 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.618340 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.633121 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.645686 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.663450 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.675659 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.675705 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.675731 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.675749 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.675762 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.684320 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.703113 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.755019 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.778721 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.778774 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.778792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.778813 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.778825 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.781574 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.810071 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.824494 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:23Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.881163 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.881224 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.881239 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.881261 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.881278 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.933846 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:42:54.802923057 +0000 UTC Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.985101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.985161 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.985180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.985207 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:23 crc kubenswrapper[4748]: I0216 14:53:23.985225 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:23Z","lastTransitionTime":"2026-02-16T14:53:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.092848 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.092902 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.092927 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.092945 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.093339 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.196832 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.196884 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.196897 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.196918 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.196929 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.279789 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.279786 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" event={"ID":"81c4b8fe-3db6-4720-81d0-a1d2d33470bb","Type":"ContainerStarted","Data":"25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.293639 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.300028 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.300070 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.300083 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.300101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.300112 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.307179 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.322196 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.341662 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.363951 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.381682 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.402442 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.403538 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.403592 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.403611 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.403643 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.403662 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.426460 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.449549 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
6T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.463129 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.477660 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.495751 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.506822 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 
14:53:24.506869 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.506881 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.506901 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.506915 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.509909 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.524610 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:24Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.610277 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.610321 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.610333 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.610351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.610363 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.713736 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.713791 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.713805 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.713829 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.713842 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.817891 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.817983 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.818008 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.818048 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.818073 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.921731 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.921795 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.921814 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.921845 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.921871 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:24Z","lastTransitionTime":"2026-02-16T14:53:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.934310 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 17:46:46.555721475 +0000 UTC Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.949454 4748 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.993667 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.993815 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:24 crc kubenswrapper[4748]: E0216 14:53:24.993943 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:24 crc kubenswrapper[4748]: I0216 14:53:24.994060 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:24 crc kubenswrapper[4748]: E0216 14:53:24.994246 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:24 crc kubenswrapper[4748]: E0216 14:53:24.994488 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.025741 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.025793 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.025807 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.025831 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.025848 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.032010 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.056190 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.074111 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.089028 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.105837 4748 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.113217 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.128435 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.128486 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.128499 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.128523 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.128536 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.132887 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z 
is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.166178 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.202438 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-1
6T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.222331 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.232325 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.232375 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.232404 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc 
kubenswrapper[4748]: I0216 14:53:25.232427 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.232439 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.240043 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 
14:53:25.255109 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.264521 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.277191 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.283085 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.288913 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.335198 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.335240 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.335250 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.335271 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.335282 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.438040 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.438085 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.438096 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.438116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.438128 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.540982 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.541039 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.541052 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.541071 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.541086 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.644509 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.644563 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.644573 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.644593 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.644607 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.748236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.748302 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.748315 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.748335 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.748351 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.825567 4748 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.851565 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.851609 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.851638 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.851655 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.851667 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.935031 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 09:35:25.106758885 +0000 UTC Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.955234 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.955313 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.955329 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.955351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:25 crc kubenswrapper[4748]: I0216 14:53:25.955365 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:25Z","lastTransitionTime":"2026-02-16T14:53:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.059028 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.059090 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.059105 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.059131 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.059154 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.163189 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.163273 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.163604 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.163639 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.164151 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.267937 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.268012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.268028 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.268047 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.268060 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.289193 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/0.log" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.292607 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484" exitCode=1 Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.292673 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.294110 4748 scope.go:117] "RemoveContainer" containerID="58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.313913 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.332260 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.351300 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.371597 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.371634 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.371662 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.371683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.371695 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.383785 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb34
2e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.404169 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.423178 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.446146 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.473052 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.475214 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 
14:53:26.475246 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.475257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.475276 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.475290 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.491990 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.514187 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.533607 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.549043 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/dock
er/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.571556 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.585363 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.586016 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.586037 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.586056 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.586070 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.595233 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:26Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.695144 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.695193 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.695205 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.695228 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.695243 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.832504 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.832548 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.832563 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.832583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.832596 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.935253 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 22:50:31.725613226 +0000 UTC Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.935382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.935435 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.935448 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.935469 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.935486 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:26Z","lastTransitionTime":"2026-02-16T14:53:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.993862 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.993927 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:26 crc kubenswrapper[4748]: I0216 14:53:26.993987 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:26 crc kubenswrapper[4748]: E0216 14:53:26.994067 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:26 crc kubenswrapper[4748]: E0216 14:53:26.994214 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:26 crc kubenswrapper[4748]: E0216 14:53:26.994417 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.039087 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.039129 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.039137 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.039153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.039162 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.141847 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.141882 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.141892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.141904 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.141914 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.244336 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.244375 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.244386 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.244400 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.244411 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.299342 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/0.log" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.303389 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.303652 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.326082 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b
154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is 
after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.343392 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.348068 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.348155 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.348174 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.348200 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.348218 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.357170 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.371102 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.394499 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 
14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.409964 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.426934 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.444929 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.450962 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.451066 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.451083 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc 
kubenswrapper[4748]: I0216 14:53:27.451104 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.451118 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.456182 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.468084 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.478061 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.489258 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.504897 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.516921 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:27Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.554072 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.554424 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.554502 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.554586 
4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.554654 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.657036 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.657101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.657120 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.657145 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.657168 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.759639 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.759753 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.759792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.759822 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.759840 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.863946 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.864035 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.864059 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.864099 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.864123 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.936200 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:09:36.648383622 +0000 UTC Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.967990 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.968050 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.968069 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.968097 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:27 crc kubenswrapper[4748]: I0216 14:53:27.968121 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:27Z","lastTransitionTime":"2026-02-16T14:53:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.070979 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.071018 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.071027 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.071044 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.071058 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.174213 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.174286 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.174302 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.174326 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.174342 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.278173 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.278222 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.278234 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.278256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.278272 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.310082 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/1.log" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.310998 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/0.log" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.315261 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2" exitCode=1 Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.315332 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.315419 4748 scope.go:117] "RemoveContainer" containerID="58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.317085 4748 scope.go:117] "RemoveContainer" containerID="4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2" Feb 16 14:53:28 crc kubenswrapper[4748]: E0216 14:53:28.317422 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.344563 4748 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.358313 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966"] Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.359298 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.363995 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.364428 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.375465 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.381666 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.381803 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.381834 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.381874 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.381974 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.395054 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b550
48cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.411835 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.434184 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.456200 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.476341 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.485746 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.485830 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.485859 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc 
kubenswrapper[4748]: I0216 14:53:28.485898 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.485927 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.493586 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 
14:53:28.510045 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.523803 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.544107 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.550385 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9cbbc92-8258-496d-b183-2321860c64cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.550471 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9cbbc92-8258-496d-b183-2321860c64cc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.550531 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnr54\" (UniqueName: \"kubernetes.io/projected/c9cbbc92-8258-496d-b183-2321860c64cc-kube-api-access-jnr54\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.550567 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9cbbc92-8258-496d-b183-2321860c64cc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.565743 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 
14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6
243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.588502 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.590851 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.590909 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.590929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc 
kubenswrapper[4748]: I0216 14:53:28.590958 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.590979 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.603907 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.624557 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.652419 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9cbbc92-8258-496d-b183-2321860c64cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.652488 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9cbbc92-8258-496d-b183-2321860c64cc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.652559 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnr54\" (UniqueName: 
\"kubernetes.io/projected/c9cbbc92-8258-496d-b183-2321860c64cc-kube-api-access-jnr54\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.652598 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9cbbc92-8258-496d-b183-2321860c64cc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.653844 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c9cbbc92-8258-496d-b183-2321860c64cc-env-overrides\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.653861 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c9cbbc92-8258-496d-b183-2321860c64cc-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.654366 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.664393 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c9cbbc92-8258-496d-b183-2321860c64cc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.674556 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.684227 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnr54\" (UniqueName: \"kubernetes.io/projected/c9cbbc92-8258-496d-b183-2321860c64cc-kube-api-access-jnr54\") pod \"ovnkube-control-plane-749d76644c-mv966\" (UID: \"c9cbbc92-8258-496d-b183-2321860c64cc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.694373 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.694587 4748 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.694644 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.694663 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.694702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.694750 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.715200 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.745992 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.772891 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.797164 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.797422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.797494 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.797513 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.797544 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.797566 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.822000 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z 
is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.855996 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 
14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6
243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.877203 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.892246 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.901211 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.901275 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.901287 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.901308 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.901322 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:28Z","lastTransitionTime":"2026-02-16T14:53:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.908246 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.927994 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.936811 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 17:25:00.140431286 +0000 UTC Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.948580 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:28Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.983859 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.994435 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.994435 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:28 crc kubenswrapper[4748]: I0216 14:53:28.994638 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:28 crc kubenswrapper[4748]: E0216 14:53:28.994822 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:28 crc kubenswrapper[4748]: E0216 14:53:28.995126 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:28 crc kubenswrapper[4748]: E0216 14:53:28.995207 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.003824 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.003896 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.003919 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.003943 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.003960 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:29 crc kubenswrapper[4748]: W0216 14:53:29.009478 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9cbbc92_8258_496d_b183_2321860c64cc.slice/crio-2b5756ca24af582e2a43cecc867f01a70784717b83bd5e1822e981b16dd7cde9 WatchSource:0}: Error finding container 2b5756ca24af582e2a43cecc867f01a70784717b83bd5e1822e981b16dd7cde9: Status 404 returned error can't find the container with id 2b5756ca24af582e2a43cecc867f01a70784717b83bd5e1822e981b16dd7cde9 Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.107540 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.107591 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.107602 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.107621 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.107633 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.211692 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.211750 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.211759 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.211776 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.211788 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.315108 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.315175 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.315192 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.315218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.315238 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.329644 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/1.log"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.340660 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" event={"ID":"c9cbbc92-8258-496d-b183-2321860c64cc","Type":"ContainerStarted","Data":"244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.341069 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" event={"ID":"c9cbbc92-8258-496d-b183-2321860c64cc","Type":"ContainerStarted","Data":"2b5756ca24af582e2a43cecc867f01a70784717b83bd5e1822e981b16dd7cde9"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.418105 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.418166 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.418188 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.418215 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.418233 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.521216 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.521268 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.521279 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.521299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.521312 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.624961 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.625038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.625058 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.625099 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.625118 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.728184 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.728232 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.728245 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.728264 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.728280 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.830521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.830581 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.830598 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.830624 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.830643 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.895328 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lll47"]
Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.895878 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:53:29 crc kubenswrapper[4748]: E0216 14:53:29.895954 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.910200 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.923781 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.933785 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.933832 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.933847 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.933867 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.933880 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:29Z","lastTransitionTime":"2026-02-16T14:53:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.938412 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:12:21.005076582 +0000 UTC Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.953686 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.966187 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntt99\" (UniqueName: \"kubernetes.io/projected/078f98ca-d871-47a5-96c3-1e818312c4c4-kube-api-access-ntt99\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.966559 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:29 crc kubenswrapper[4748]: I0216 14:53:29.984055 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:29Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.009995 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.022442 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.034743 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.035886 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.035919 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.035929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.035944 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.035953 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.051406 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.066140 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.067671 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.067799 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntt99\" (UniqueName: \"kubernetes.io/projected/078f98ca-d871-47a5-96c3-1e818312c4c4-kube-api-access-ntt99\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.067886 4748 secret.go:188] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.067957 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:30.567935786 +0000 UTC m=+36.259604835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.078362 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27
f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.085557 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntt99\" (UniqueName: \"kubernetes.io/projected/078f98ca-d871-47a5-96c3-1e818312c4c4-kube-api-access-ntt99\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.095096 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin
\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.118166 4748 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 
14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6
243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.130637 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.138855 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.138912 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.138927 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.138947 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.138960 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.149360 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.162267 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.177936 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.242521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.242903 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.243002 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc 
kubenswrapper[4748]: I0216 14:53:30.243086 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.243179 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.345556 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.345661 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.345689 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.345755 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.345781 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.348124 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" event={"ID":"c9cbbc92-8258-496d-b183-2321860c64cc","Type":"ContainerStarted","Data":"74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.372998 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.401075 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.421330 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.445920 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.448325 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.448370 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.448382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.448400 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.448413 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.468617 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.490688 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.509581 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.531484 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.551387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.551473 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.551498 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.551530 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.551558 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.567065 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 
14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6
243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.571814 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.571972 4748 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.572034 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:31.572017199 +0000 UTC m=+37.263686238 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.583985 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc 
kubenswrapper[4748]: I0216 14:53:30.608566 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.626790 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.641955 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.658481 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.658523 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.658537 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc 
kubenswrapper[4748]: I0216 14:53:30.658555 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.658571 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.663204 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.672492 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.672653 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.672693 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.672756 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.672877 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.672940 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:46.672921908 +0000 UTC m=+52.364590947 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.672979 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.673078 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:53:46.673032141 +0000 UTC m=+52.364701180 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.673085 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.673173 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.673194 4748 
projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.673142 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:46.673130083 +0000 UTC m=+52.364799122 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.673272 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:46.673250296 +0000 UTC m=+52.364919345 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.682127 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.697323 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:30Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.763822 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.763886 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.763906 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.763930 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.763949 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.774260 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.774543 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.774602 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.774623 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.774710 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:46.774680567 +0000 UTC m=+52.466349646 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.868113 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.868193 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.868216 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.868247 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.868272 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.939098 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:49:30.39063458 +0000 UTC Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.972053 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.972118 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.972131 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.972156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.972171 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:30Z","lastTransitionTime":"2026-02-16T14:53:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.994310 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.994352 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.994576 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:30 crc kubenswrapper[4748]: I0216 14:53:30.994353 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.994801 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:30 crc kubenswrapper[4748]: E0216 14:53:30.994950 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.050316 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.050393 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.050419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.050449 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.050505 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.068552 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:31Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.073941 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.074009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.074025 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.074054 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.074071 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.090047 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:31Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.095551 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.095622 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.095641 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.095665 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.095683 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.117776 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:31Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.122558 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.122600 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.122615 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.122637 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.122653 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.139318 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:31Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.144137 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.144216 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.144230 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.144257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.144275 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.165553 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:31Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:31Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.165750 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.167824 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.167872 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.167887 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.167908 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.167925 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.270905 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.270960 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.270976 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.271006 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.271022 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.374635 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.374690 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.374705 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.374757 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.374775 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.478153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.478198 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.478213 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.478236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.478252 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.582173 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.582425 4748 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.582452 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.582519 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.582537 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.582564 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.582570 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:33.58253164 +0000 UTC m=+39.274200689 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.582584 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.685687 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.685792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.685810 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.685842 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.685862 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.790012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.790070 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.790082 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.790105 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.790123 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.893327 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.893393 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.893413 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.893439 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.893459 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.940169 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 12:09:38.973898128 +0000 UTC Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.994049 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:31 crc kubenswrapper[4748]: E0216 14:53:31.994890 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.997356 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.997418 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.997432 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.997466 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:31 crc kubenswrapper[4748]: I0216 14:53:31.997482 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:31Z","lastTransitionTime":"2026-02-16T14:53:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.100411 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.100493 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.100508 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.100533 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.100551 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.204200 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.204267 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.204289 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.204316 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.204335 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.307492 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.307594 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.307614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.307684 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.307711 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.410895 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.410968 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.410986 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.411009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.411026 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.514121 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.514192 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.514220 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.514248 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.514266 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.617921 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.617964 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.617975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.617991 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.618001 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.720698 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.720824 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.720845 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.720870 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.720899 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.823921 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.823966 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.823975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.823993 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.824007 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.927001 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.927059 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.927072 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.927091 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.927104 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:32Z","lastTransitionTime":"2026-02-16T14:53:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.940440 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:13:56.913106282 +0000 UTC Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.994134 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:32 crc kubenswrapper[4748]: E0216 14:53:32.994332 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.994475 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:32 crc kubenswrapper[4748]: E0216 14:53:32.994699 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:32 crc kubenswrapper[4748]: I0216 14:53:32.994915 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:32 crc kubenswrapper[4748]: E0216 14:53:32.995414 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.030618 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.030664 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.030678 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.030696 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.030725 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.134336 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.134390 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.134403 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.134425 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.134442 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.238315 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.238355 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.238367 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.238386 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.238396 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.342614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.342685 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.342709 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.342771 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.342793 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.445524 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.445601 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.445618 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.445642 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.445674 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.552316 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.552367 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.552384 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.552412 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.552431 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.604468 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:33 crc kubenswrapper[4748]: E0216 14:53:33.604680 4748 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:33 crc kubenswrapper[4748]: E0216 14:53:33.604803 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:37.604778826 +0000 UTC m=+43.296447905 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.655546 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.655623 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.655653 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.655685 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.655703 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.759034 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.759078 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.759088 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.759103 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.759113 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.862993 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.863061 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.863089 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.863115 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.863131 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.941271 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 14:13:29.761382642 +0000 UTC
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.966254 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.966332 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.966364 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.966398 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.966420 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:33Z","lastTransitionTime":"2026-02-16T14:53:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:33 crc kubenswrapper[4748]: I0216 14:53:33.993750 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:53:33 crc kubenswrapper[4748]: E0216 14:53:33.993958 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.070005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.070075 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.070098 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.070131 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.070155 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.174091 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.174165 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.174179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.174206 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.174231 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.279356 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.279439 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.279453 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.279479 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.279496 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.381956 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.382425 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.382568 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.382741 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.382870 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.486913 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.486969 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.486986 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.487009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.487026 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.590297 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.590649 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.590780 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.590908 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.591019 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.694858 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.695201 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.695469 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.695686 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.695943 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.799278 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.799662 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.799929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.800097 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.800238 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.839759 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.858774 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.873352 4748 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"
hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.894757 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.908454 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.909046 4748 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.909062 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.909090 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.909107 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:34Z","lastTransitionTime":"2026-02-16T14:53:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.915577 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.932692 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.941962 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:07:20.624645155 +0000 UTC Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.948948 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.966572 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.978426 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.993805 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.993939 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:34 crc kubenswrapper[4748]: E0216 14:53:34.994022 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:34 crc kubenswrapper[4748]: E0216 14:53:34.994130 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:34 crc kubenswrapper[4748]: I0216 14:53:34.994478 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:34 crc kubenswrapper[4748]: E0216 14:53:34.994895 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.001993 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 
14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6
243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:34Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.012507 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.012560 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.012580 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.012608 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.012633 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.014165 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc 
kubenswrapper[4748]: I0216 14:53:35.027957 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.042948 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.059654 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.072361 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.084494 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.101039 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.116098 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.116155 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.116175 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.116201 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.116220 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.121039 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.135533 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.149479 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.162781 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.177267 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.190437 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.208743 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.219574 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.219777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.219912 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.220018 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.220115 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.235992 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58b3fd00256812f1448f2349a4268eaad457daa2e7fa59490e3150c226dbb484\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:25Z\\\",\\\"message\\\":\\\" 6057 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0216 14:53:25.780224 6057 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:25.780225 6057 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:25.780266 6057 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 
14:53:25.780262 6057 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:25.780239 6057 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:25.780282 6057 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:25.780315 6057 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0216 14:53:25.780354 6057 factory.go:656] Stopping watch factory\\\\nI0216 14:53:25.780363 6057 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0216 14:53:25.780371 6057 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:25.780379 6057 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:25.780443 6057 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0216 14:53:25.780474 6057 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6
243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.252475 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.272505 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.289279 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.307530 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.324525 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.324646 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.324695 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.324708 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:35 crc 
kubenswrapper[4748]: I0216 14:53:35.324747 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.324763 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.337665 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.350992 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.365558 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 
14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.428551 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.428630 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.428648 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.428675 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.428695 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.532103 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.532191 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.532218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.532260 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.532287 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.637077 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.637138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.637159 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.637187 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.637209 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.742887 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.742958 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.742975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.743007 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.743024 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.846974 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.847052 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.847410 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.847462 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.847482 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.942676 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 16:02:41.066996458 +0000 UTC
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.951294 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.951462 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.951488 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.951521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.951546 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:35Z","lastTransitionTime":"2026-02-16T14:53:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:35 crc kubenswrapper[4748]: I0216 14:53:35.993894 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:53:35 crc kubenswrapper[4748]: E0216 14:53:35.994189 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.054686 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.054768 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.054784 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.054806 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.054823 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.157750 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.157819 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.157845 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.157874 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.157894 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.261255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.261324 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.261342 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.261368 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.261389 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.364661 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.364705 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.364735 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.364749 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.364759 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.468162 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.468251 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.468286 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.468321 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.468344 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.572649 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.572781 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.572803 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.572830 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.572848 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.676770 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.676843 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.676864 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.676894 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.676914 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.780570 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.780634 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.780653 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.780679 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.780697 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.885947 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.886014 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.886032 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.886057 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.886078 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.943465 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 12:27:41.201443315 +0000 UTC
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.989366 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.989424 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.989443 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.989467 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.989485 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:36Z","lastTransitionTime":"2026-02-16T14:53:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.994158 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.994211 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:53:36 crc kubenswrapper[4748]: I0216 14:53:36.994330 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:53:36 crc kubenswrapper[4748]: E0216 14:53:36.994324 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:53:36 crc kubenswrapper[4748]: E0216 14:53:36.994505 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:53:36 crc kubenswrapper[4748]: E0216 14:53:36.994700 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.106695 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.106787 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.106807 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.106835 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.106855 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.211510 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.211600 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.211620 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.211656 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.211682 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.315219 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.315324 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.315382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.315411 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.315467 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.418532 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.418580 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.418595 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.418614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.418627 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.522201 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.522255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.522273 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.522300 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.522316 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.625882 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.625945 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.625962 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.625985 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.626002 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.654065 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:37 crc kubenswrapper[4748]: E0216 14:53:37.654271 4748 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:37 crc kubenswrapper[4748]: E0216 14:53:37.654375 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:53:45.654344836 +0000 UTC m=+51.346013935 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.730418 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.730483 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.730502 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.730529 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.730547 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.834108 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.834163 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.834180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.834205 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.834225 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.938131 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.938190 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.938210 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.938236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.938255 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:37Z","lastTransitionTime":"2026-02-16T14:53:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.944385 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:49:34.407689574 +0000 UTC Feb 16 14:53:37 crc kubenswrapper[4748]: I0216 14:53:37.993349 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:37 crc kubenswrapper[4748]: E0216 14:53:37.993617 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.041587 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.041666 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.041688 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.041739 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.041757 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.144919 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.144994 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.145012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.145036 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.145107 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.248149 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.248616 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.248839 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.249061 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.249229 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.353812 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.353895 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.353916 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.353944 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.353964 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.457107 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.457153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.457172 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.457195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.457212 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.561634 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.561683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.561695 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.561807 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.561820 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.664120 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.664177 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.664197 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.664220 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.664239 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.767795 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.767866 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.767884 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.767909 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.767928 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.870653 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.870751 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.870776 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.870803 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.870821 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.944739 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:27:57.015310138 +0000 UTC Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.973522 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.973584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.973603 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.973629 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.973648 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:38Z","lastTransitionTime":"2026-02-16T14:53:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.994328 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.994438 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:38 crc kubenswrapper[4748]: I0216 14:53:38.994486 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:38 crc kubenswrapper[4748]: E0216 14:53:38.994865 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:38 crc kubenswrapper[4748]: E0216 14:53:38.994638 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:38 crc kubenswrapper[4748]: E0216 14:53:38.994925 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.076577 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.077203 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.077378 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.077599 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.077809 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.180958 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.181026 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.181046 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.181074 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.181092 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.284577 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.284652 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.284674 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.284702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.284756 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.387267 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.387361 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.387384 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.387417 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.387439 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.490120 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.490174 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.490192 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.490217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.490234 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.593645 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.593700 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.593791 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.593822 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.593838 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.697824 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.698025 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.698044 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.698067 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.698083 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.801869 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.801929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.801946 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.801971 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.801990 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.905157 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.905223 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.905240 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.905265 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.905286 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:39Z","lastTransitionTime":"2026-02-16T14:53:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.945766 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 13:32:05.508418283 +0000 UTC Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.993665 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:39 crc kubenswrapper[4748]: E0216 14:53:39.993881 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:39 crc kubenswrapper[4748]: I0216 14:53:39.996063 4748 scope.go:117] "RemoveContainer" containerID="4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.007269 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.007331 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.007349 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.007371 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.007387 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.016859 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.033449 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.048410 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.063103 4748 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.077613 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.098145 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.110044 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.110089 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.110186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.110204 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.110219 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.113453 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.129380 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.168950 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.187008 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.202380 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.213409 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.213440 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.213455 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.213475 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.213489 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.220366 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.237254 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.254328 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.268830 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.282304 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.316861 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.316931 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.316949 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.316976 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.316995 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.392130 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/1.log" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.404967 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.405159 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.419574 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.419646 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.419671 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.419702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.419758 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.429233 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.447863 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.469083 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.494078 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.511374 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.522696 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.522798 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.522824 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.522855 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.522879 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.537465 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z 
is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.573762 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.588855 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc 
kubenswrapper[4748]: I0216 14:53:40.612243 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.625394 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.625445 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.625456 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.625475 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.625490 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.639729 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.657767 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
4:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.677053 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.693339 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.709312 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.728183 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.728339 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc 
kubenswrapper[4748]: I0216 14:53:40.728473 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.728559 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.728643 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.730524 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.748441 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:53:40Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.833747 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.833826 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.833846 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.833870 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.833888 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.936663 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.936777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.936797 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.936823 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.936840 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:40Z","lastTransitionTime":"2026-02-16T14:53:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.946217 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:35:39.335500024 +0000 UTC Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.993777 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.993906 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:40 crc kubenswrapper[4748]: E0216 14:53:40.993930 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:40 crc kubenswrapper[4748]: E0216 14:53:40.994102 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:40 crc kubenswrapper[4748]: I0216 14:53:40.994326 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:40 crc kubenswrapper[4748]: E0216 14:53:40.994393 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.040063 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.040097 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.040106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.040128 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.040150 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.143038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.143491 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.143584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.143680 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.143791 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.208067 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.208112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.208124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.208143 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.208156 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.223629 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.228886 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.229133 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.229218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.229338 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.229421 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.244850 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.250069 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.250135 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.250155 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.250184 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.250203 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.264416 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.269148 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.269222 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.269252 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.269285 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.269309 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.289440 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.295062 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.295121 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.295141 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.295167 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.295187 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.307469 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.307696 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.309223 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.309262 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.309278 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.309296 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.309310 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.411062 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/2.log" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.411335 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.411388 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.411407 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.411434 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.411454 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.411959 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/1.log" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.415640 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0" exitCode=1 Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.415705 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.415790 4748 scope.go:117] "RemoveContainer" containerID="4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.417171 4748 scope.go:117] "RemoveContainer" containerID="cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0" Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.417509 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.448464 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.469942 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.495943 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.515066 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.515171 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.515252 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.515291 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.515365 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.519041 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.552504 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d516bd629d3ffea5f1c542b89f59168a21e3258166918add2c2a4e0e3f212f2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:27Z\\\",\\\"message\\\":\\\"erator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"5b85277d-d9b7-4a68-8e4e-2b80594d9347\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, 
Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-operator\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.183\\\\\\\", Port:9001, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0216 14:53:27.302772 6181 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 
14:53:41.100939 6389 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.572519 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.595381 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.616301 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.619600 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.619654 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.619676 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc 
kubenswrapper[4748]: I0216 14:53:41.619711 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.619772 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.637689 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 
14:53:41.657633 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.668959 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.697760 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.713459 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f129
62a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.723226 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.723268 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.723285 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.723309 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.723321 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.728007 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.749065 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 
14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.762916 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:41Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.826066 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.826122 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.826134 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.826154 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.826169 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.930510 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.930595 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.930622 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.930674 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.930702 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:41Z","lastTransitionTime":"2026-02-16T14:53:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.946684 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 15:08:43.001142035 +0000 UTC Feb 16 14:53:41 crc kubenswrapper[4748]: I0216 14:53:41.994356 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:41 crc kubenswrapper[4748]: E0216 14:53:41.994548 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.033848 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.033909 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.033930 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.033957 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.033981 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.137395 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.137456 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.137468 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.137489 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.137505 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.241233 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.241308 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.241328 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.241355 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.241374 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.346647 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.346770 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.346793 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.346823 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.346844 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.427917 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/2.log" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.450936 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.451018 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.451039 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.451065 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.451088 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.555311 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.555373 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.555391 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.555416 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.555434 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.659412 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.659887 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.660025 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.660180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.660318 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.764262 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.764896 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.765079 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.765263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.765457 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.869101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.869170 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.869188 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.869239 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.869261 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.947487 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:06:48.701124506 +0000 UTC Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.972791 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.972870 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.972882 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.972908 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.972922 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:42Z","lastTransitionTime":"2026-02-16T14:53:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.993408 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.993471 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:42 crc kubenswrapper[4748]: I0216 14:53:42.993404 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:42 crc kubenswrapper[4748]: E0216 14:53:42.993576 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:42 crc kubenswrapper[4748]: E0216 14:53:42.993757 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:42 crc kubenswrapper[4748]: E0216 14:53:42.994035 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.076471 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.076543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.076561 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.076589 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.076606 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.077590 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.079008 4748 scope.go:117] "RemoveContainer" containerID="cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0" Feb 16 14:53:43 crc kubenswrapper[4748]: E0216 14:53:43.079278 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.105392 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f354
47153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.127142 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.148913 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.168116 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.180009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.180068 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc 
kubenswrapper[4748]: I0216 14:53:43.180088 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.180116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.180136 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.196271 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.216035 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.248644 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.263790 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.283410 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.284608 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.284683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.284698 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.284740 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.284757 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.304544 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.327382 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.348822 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.364880 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.386045 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.388549 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.388603 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.388613 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.388633 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.388650 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.406930 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.419370 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.492357 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.492419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.492431 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.492452 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.492466 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.596748 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.596815 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.596830 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.596852 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.596870 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.700303 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.700357 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.700374 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.700399 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.700420 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.804882 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.804950 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.804963 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.804985 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.805000 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.908139 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.908224 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.908259 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.908292 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.908313 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:43Z","lastTransitionTime":"2026-02-16T14:53:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.948797 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:41:05.512048446 +0000 UTC Feb 16 14:53:43 crc kubenswrapper[4748]: I0216 14:53:43.993485 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:43 crc kubenswrapper[4748]: E0216 14:53:43.993819 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.011361 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.011419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.011437 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.011459 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.011475 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.115830 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.115896 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.115911 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.115999 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.116019 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.219441 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.219504 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.219522 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.219545 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.219562 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.323217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.323280 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.323308 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.323338 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.323363 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.426557 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.426624 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.426643 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.426669 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.426687 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.529612 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.529677 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.529694 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.529749 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.529769 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.633706 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.633792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.633810 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.633834 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.633853 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.737836 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.737952 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.737972 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.737998 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.738016 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.841271 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.841354 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.841378 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.841409 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.841432 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.943996 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.944090 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.944112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.944138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.944154 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:44Z","lastTransitionTime":"2026-02-16T14:53:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.949521 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:24:13.774435456 +0000 UTC Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.994103 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.994131 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:44 crc kubenswrapper[4748]: I0216 14:53:44.994267 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:44 crc kubenswrapper[4748]: E0216 14:53:44.994480 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:44 crc kubenswrapper[4748]: E0216 14:53:44.994585 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:44 crc kubenswrapper[4748]: E0216 14:53:44.994768 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.013182 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc 
kubenswrapper[4748]: I0216 14:53:45.034834 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.050299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.050358 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.050381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.050412 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.050436 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.055699 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.075311 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.096826 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.113091 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.135420 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.153040 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.153094 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.153111 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.153139 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.153164 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.171868 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.193636 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.215407 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.238770 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
4:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.256640 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.256761 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.256781 4748 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.257180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.257216 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.265275 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f
799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.288351 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.308857 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.329515 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.344788 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:53:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.371521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.371565 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.371577 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.371595 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.371607 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.474307 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.474404 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.474422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.474448 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.474467 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.577969 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.578070 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.578086 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.578111 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.578131 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.681388 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.681446 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.681464 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.681488 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.681504 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.750044 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:45 crc kubenswrapper[4748]: E0216 14:53:45.750233 4748 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:45 crc kubenswrapper[4748]: E0216 14:53:45.750316 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:01.75029276 +0000 UTC m=+67.441961839 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.785084 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.785133 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.785145 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.785171 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.785187 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.888530 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.888583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.888600 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.888627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.888646 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.950149 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:15:26.839042379 +0000 UTC Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.992234 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.992315 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.992340 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.992372 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.992394 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:45Z","lastTransitionTime":"2026-02-16T14:53:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:45 crc kubenswrapper[4748]: I0216 14:53:45.993473 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:45 crc kubenswrapper[4748]: E0216 14:53:45.993654 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.096747 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.096818 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.096835 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.096863 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.096883 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.200048 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.200128 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.200146 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.200179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.200198 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.303583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.303677 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.303698 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.303763 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.303784 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.406470 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.406525 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.406586 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.406644 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.406662 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.509687 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.509776 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.509800 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.509825 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.509842 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.612448 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.612516 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.612539 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.612567 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.612590 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.715901 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.715972 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.715996 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.716024 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.716047 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.761703 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.761935 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.762000 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762082 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.762046352 +0000 UTC m=+84.453715431 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762116 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762175 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.762157275 +0000 UTC m=+84.453826344 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.762202 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762298 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762311 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762345 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762373 4748 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762348 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.762335419 +0000 UTC m=+84.454004488 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.762453 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.762431212 +0000 UTC m=+84.454100291 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.819563 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.819614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.819627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.819648 4748 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.819664 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.863611 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.863809 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.863834 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.863849 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.863995 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:18.863886064 +0000 UTC m=+84.555555123 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.922405 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.922463 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.922475 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.922493 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.922506 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:46Z","lastTransitionTime":"2026-02-16T14:53:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.951385 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 07:45:43.964715461 +0000 UTC Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.994049 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.994180 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.994398 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.994200 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:46 crc kubenswrapper[4748]: I0216 14:53:46.994699 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:46 crc kubenswrapper[4748]: E0216 14:53:46.994892 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.025156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.025198 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.025208 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.025222 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.025232 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.129150 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.129218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.129236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.129261 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.129280 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.231627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.231705 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.231758 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.231792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.231869 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.334270 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.334355 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.334395 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.334414 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.334426 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.437550 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.437612 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.437626 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.437646 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.437661 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.542088 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.542546 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.542570 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.542601 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.542629 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.645537 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.645589 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.645607 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.645628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.645644 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.749146 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.749227 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.749253 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.749288 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.749314 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.851836 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.851897 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.851916 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.851940 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.851959 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.952580 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:31:46.868873505 +0000 UTC Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.954849 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.954892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.954908 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.954938 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.954957 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:47Z","lastTransitionTime":"2026-02-16T14:53:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:47 crc kubenswrapper[4748]: I0216 14:53:47.994312 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:47 crc kubenswrapper[4748]: E0216 14:53:47.994560 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.058134 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.058200 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.058210 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.058238 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.058262 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.166297 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.166351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.166376 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.166404 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.166428 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.269143 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.269195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.269211 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.269230 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.269246 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.372526 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.372595 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.372608 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.372622 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.372630 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.476116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.476185 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.476207 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.476235 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.476258 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.579410 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.579495 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.579534 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.579569 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.579592 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.682807 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.682882 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.682907 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.682935 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.682956 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.785570 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.785631 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.785649 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.785675 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.785695 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.889164 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.889227 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.889244 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.889268 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.889285 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.953479 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 21:01:04.94232754 +0000 UTC Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.992023 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.992099 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.992124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.992153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.992177 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:48Z","lastTransitionTime":"2026-02-16T14:53:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.993474 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.993523 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:48 crc kubenswrapper[4748]: E0216 14:53:48.993620 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:48 crc kubenswrapper[4748]: I0216 14:53:48.993654 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:48 crc kubenswrapper[4748]: E0216 14:53:48.993862 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:48 crc kubenswrapper[4748]: E0216 14:53:48.994054 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.095428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.095470 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.095481 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.095501 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.095517 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.203230 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.203311 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.203334 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.203365 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.203388 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.306086 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.306123 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.306134 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.306151 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.306163 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.420024 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.420073 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.420087 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.420109 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.420122 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.522905 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.522946 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.522958 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.522976 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.522990 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.625341 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.625405 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.625422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.625446 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.625464 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.728543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.728603 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.728625 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.728652 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.728675 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.831386 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.831449 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.831466 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.831495 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.831513 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.934107 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.934167 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.934206 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.934236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.934259 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:49Z","lastTransitionTime":"2026-02-16T14:53:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.954099 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:12:36.19503121 +0000 UTC Feb 16 14:53:49 crc kubenswrapper[4748]: I0216 14:53:49.993763 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:49 crc kubenswrapper[4748]: E0216 14:53:49.993930 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.037626 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.037689 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.037707 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.037754 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.037771 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.141708 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.141787 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.141799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.141819 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.141832 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.244871 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.244960 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.244979 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.245004 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.245022 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.349841 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.349892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.349904 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.349924 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.349939 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.453299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.453392 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.453448 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.453472 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.453525 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.458561 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.475881 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.485540 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"cont
ainerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContaine
rStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.507361 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.528457 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.552927 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.556360 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.556398 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.556410 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.556426 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.556436 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.569107 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.588233 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.611852 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.629113 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.645249 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.659147 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.659186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.659197 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.659213 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.659223 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.662227 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z 
is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.684893 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.702552 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.724825 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.742961 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.766852 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.767521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.767568 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.767644 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc 
kubenswrapper[4748]: I0216 14:53:50.767683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.767705 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.785043 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:50Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.872923 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.872994 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.873012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.873038 4748 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.873060 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.954737 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:31:06.140157173 +0000 UTC Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.976951 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.977042 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.977057 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.977074 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.977086 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:50Z","lastTransitionTime":"2026-02-16T14:53:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.993618 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.993693 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:50 crc kubenswrapper[4748]: I0216 14:53:50.993619 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:50 crc kubenswrapper[4748]: E0216 14:53:50.993791 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:50 crc kubenswrapper[4748]: E0216 14:53:50.993903 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:50 crc kubenswrapper[4748]: E0216 14:53:50.993998 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.080400 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.080468 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.080487 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.080517 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.080537 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.184084 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.184136 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.184148 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.184166 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.184178 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.287875 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.287931 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.287942 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.287962 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.287975 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.391312 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.391371 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.391390 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.391413 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.391435 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.493769 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.493816 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.493825 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.493840 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.493850 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.596484 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.596543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.596552 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.596566 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.596595 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.691783 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.691846 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.691864 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.691893 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.691935 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:51 crc kubenswrapper[4748]: E0216 14:53:51.714949 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.719695 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.719782 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.719802 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.719828 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.719846 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:51 crc kubenswrapper[4748]: E0216 14:53:51.739399 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.744790 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.744885 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.744907 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.744933 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.744951 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:53:51 crc kubenswrapper[4748]: E0216 14:53:51.765399 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.770475 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.770544 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.770591 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.770627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.770654 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: E0216 14:53:51.790662 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.796227 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.796278 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.796296 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.796320 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.796343 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: E0216 14:53:51.813342 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:51Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:51 crc kubenswrapper[4748]: E0216 14:53:51.813570 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.820904 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.820981 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.821007 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.821046 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.821064 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.923997 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.924070 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.924089 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.924116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.924135 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:51Z","lastTransitionTime":"2026-02-16T14:53:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.955822 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:52:00.618791046 +0000 UTC Feb 16 14:53:51 crc kubenswrapper[4748]: I0216 14:53:51.993764 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:51 crc kubenswrapper[4748]: E0216 14:53:51.993955 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.027531 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.027604 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.027628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.027656 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.027678 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.131241 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.131305 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.131327 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.131351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.131382 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.234096 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.234177 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.234201 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.234227 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.234245 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.337440 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.337491 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.337509 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.337533 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.337550 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.439821 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.439967 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.439992 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.440078 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.440102 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.543269 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.543313 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.543327 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.543347 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.543362 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.645926 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.645975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.645990 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.646005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.646017 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.748867 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.748913 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.748926 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.748941 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.748953 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.851809 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.851871 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.851889 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.851912 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.851930 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.955624 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.955700 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.955748 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.955773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.955791 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:52Z","lastTransitionTime":"2026-02-16T14:53:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.955996 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 10:57:00.214338974 +0000 UTC Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.994128 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.994188 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:52 crc kubenswrapper[4748]: I0216 14:53:52.994277 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:52 crc kubenswrapper[4748]: E0216 14:53:52.994404 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:52 crc kubenswrapper[4748]: E0216 14:53:52.994803 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:52 crc kubenswrapper[4748]: E0216 14:53:52.994889 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.058369 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.058419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.058435 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.058461 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.058479 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.161826 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.161885 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.161903 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.161927 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.161958 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.265266 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.265339 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.265356 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.265382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.265400 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.369052 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.369110 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.369127 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.369150 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.369167 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.472178 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.472241 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.472256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.472277 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.472290 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.575334 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.575390 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.575426 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.575447 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.575460 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.677794 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.677864 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.677884 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.677909 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.677927 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.781091 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.781185 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.781216 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.781260 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.781284 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.884205 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.884251 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.884262 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.884279 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.884289 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.956961 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:35:37.036380982 +0000 UTC Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.987122 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.987172 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.987198 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.987226 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.987247 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:53Z","lastTransitionTime":"2026-02-16T14:53:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:53 crc kubenswrapper[4748]: I0216 14:53:53.993798 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:53 crc kubenswrapper[4748]: E0216 14:53:53.993977 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.090112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.090144 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.090156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.090171 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.090183 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.194331 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.194423 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.194450 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.194482 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.194517 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.298223 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.298290 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.298308 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.298334 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.298354 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.401574 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.401642 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.401661 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.401684 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.401701 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.505381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.505572 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.505599 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.505624 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.505642 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.609031 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.609099 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.609116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.609140 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.609159 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.711330 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.711401 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.711419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.711446 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.711467 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.815364 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.815413 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.815423 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.815439 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.815452 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.919683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.919762 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.919777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.919801 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.919815 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:54Z","lastTransitionTime":"2026-02-16T14:53:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.957897 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 04:42:28.969772624 +0000 UTC Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.993425 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.993489 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:54 crc kubenswrapper[4748]: I0216 14:53:54.993426 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:54 crc kubenswrapper[4748]: E0216 14:53:54.993574 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:54 crc kubenswrapper[4748]: E0216 14:53:54.993690 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:54 crc kubenswrapper[4748]: E0216 14:53:54.993814 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.015845 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.023017 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.023067 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.023082 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.023103 4748 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.023117 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.031923 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.045494 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.059533 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.086572 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.101128 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.121769 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.126249 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.126336 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.126368 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.126402 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.126429 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.140889 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.159364 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.175329 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.189936 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
4:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.204927 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.220103 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.230068 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.230130 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.230149 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.230180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.230202 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.241792 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servicea
ccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.259478 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c427
45f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.280326 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.298065 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:53:55Z is after 2025-08-24T17:21:41Z" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.333042 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.333094 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.333112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.333141 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.333165 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.435907 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.435955 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.435971 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.435995 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.436013 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.539617 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.539686 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.539710 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.539788 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.539805 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.648690 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.648814 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.648838 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.648864 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.648883 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.751839 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.751883 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.751894 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.751909 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.751920 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.855253 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.855321 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.855341 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.855366 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.855385 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.957773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.957833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.957851 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.957874 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.957894 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:55Z","lastTransitionTime":"2026-02-16T14:53:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.958055 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:09:44.816199602 +0000 UTC Feb 16 14:53:55 crc kubenswrapper[4748]: I0216 14:53:55.993540 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:55 crc kubenswrapper[4748]: E0216 14:53:55.993821 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.061486 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.061556 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.061575 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.061600 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.061621 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.164168 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.164232 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.164250 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.164272 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.164291 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.267597 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.268100 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.268126 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.268153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.268171 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.370952 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.371015 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.371033 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.371059 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.371078 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.474215 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.474271 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.474282 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.474300 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.474313 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.577385 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.577465 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.577484 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.577508 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.577529 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.681484 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.681551 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.681568 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.681596 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.681613 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.785064 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.785136 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.785153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.785180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.785207 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.888195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.888258 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.888275 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.888300 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.888317 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.959160 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 18:19:28.81306119 +0000 UTC Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.990882 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.990936 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.990953 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.990977 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.990995 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:56Z","lastTransitionTime":"2026-02-16T14:53:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.993315 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.993371 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:53:56 crc kubenswrapper[4748]: I0216 14:53:56.993319 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:53:56 crc kubenswrapper[4748]: E0216 14:53:56.993446 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:53:56 crc kubenswrapper[4748]: E0216 14:53:56.993540 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:53:56 crc kubenswrapper[4748]: E0216 14:53:56.993641 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.094106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.094178 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.094193 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.094210 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.094246 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.197763 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.197892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.197962 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.197995 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.198022 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.301378 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.301443 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.301455 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.301494 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.301510 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.404664 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.404779 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.404799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.404827 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.404846 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.507383 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.507459 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.507489 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.507523 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.507546 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.610379 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.610420 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.610454 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.610474 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.610488 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.713141 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.713194 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.713207 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.713227 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.713240 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.817214 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.817331 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.817356 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.817385 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.817412 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.923833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.923892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.923930 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.924110 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.924145 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:57Z","lastTransitionTime":"2026-02-16T14:53:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.960268 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 19:51:06.183276285 +0000 UTC Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.994149 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:53:57 crc kubenswrapper[4748]: E0216 14:53:57.994890 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:53:57 crc kubenswrapper[4748]: I0216 14:53:57.995503 4748 scope.go:117] "RemoveContainer" containerID="cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0"
Feb 16 14:53:57 crc kubenswrapper[4748]: E0216 14:53:57.996193 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.028333 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.028408 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.028433 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.028465 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.028486 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.132172 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.132221 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.132237 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.132256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.132268 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.234503 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.234547 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.234559 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.234575 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.234586 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.337571 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.337633 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.337649 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.337675 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.337694 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.440348 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.440415 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.440438 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.440460 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.440473 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.542937 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.543032 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.543067 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.543101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.543123 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.645828 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.645918 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.645936 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.645955 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.645968 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.749276 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.749336 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.749356 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.749381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.749397 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.852426 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.852500 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.852518 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.852543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.852561 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.955078 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.955143 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.955162 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.955187 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.955207 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:58Z","lastTransitionTime":"2026-02-16T14:53:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.960851 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:56:50.506627087 +0000 UTC
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.993641 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.993755 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:53:58 crc kubenswrapper[4748]: E0216 14:53:58.993823 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:53:58 crc kubenswrapper[4748]: I0216 14:53:58.993840 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:53:58 crc kubenswrapper[4748]: E0216 14:53:58.993919 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:53:58 crc kubenswrapper[4748]: E0216 14:53:58.993997 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.058377 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.058430 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.058440 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.058456 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.058472 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.161593 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.161654 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.161668 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.161687 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.161701 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.265034 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.265386 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.265487 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.265583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.265672 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.368844 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.369352 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.369791 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.370079 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.370252 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.472462 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.472528 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.472539 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.472560 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.472576 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.575116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.575168 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.575186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.575209 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.575227 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.677325 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.677380 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.677393 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.677416 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.677429 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.780542 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.780594 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.780605 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.780623 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.780635 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.883085 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.883147 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.883159 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.883177 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.883189 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.961648 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 15:49:22.577750058 +0000 UTC
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.985469 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.985500 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.985523 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.985538 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.985550 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:53:59Z","lastTransitionTime":"2026-02-16T14:53:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:53:59 crc kubenswrapper[4748]: I0216 14:53:59.993858 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:53:59 crc kubenswrapper[4748]: E0216 14:53:59.993985 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.088075 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.088139 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.088154 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.088169 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.088182 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.191209 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.191283 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.191305 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.191332 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.191353 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.293968 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.294019 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.294035 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.294056 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.294073 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.396654 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.396774 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.396802 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.396831 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.396851 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.499664 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.499728 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.499738 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.499756 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.499766 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.602653 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.602772 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.602791 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.602816 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.602832 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.705320 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.705371 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.705387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.705404 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.705414 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.807745 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.807781 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.807793 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.807810 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.807822 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.909224 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.909257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.909268 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.909283 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.909294 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:00Z","lastTransitionTime":"2026-02-16T14:54:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.961984 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 18:31:04.892377933 +0000 UTC
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.994360 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.994417 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:00 crc kubenswrapper[4748]: I0216 14:54:00.994403 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:00 crc kubenswrapper[4748]: E0216 14:54:00.994522 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:00 crc kubenswrapper[4748]: E0216 14:54:00.994645 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:00 crc kubenswrapper[4748]: E0216 14:54:00.994975 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.006236 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.011976 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.012014 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.012024 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.012038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.012048 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.114583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.114615 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.114623 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.114633 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.114660 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.217564 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.217623 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.217639 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.217662 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.217681 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.320588 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.320658 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.320670 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.320693 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.320706 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.423794 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.423873 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.423895 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.423922 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.423940 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.526773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.527150 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.527168 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.527191 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.527206 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.629799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.629967 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.629981 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.629998 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.630011 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.732627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.732693 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.732736 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.732763 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.732782 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.759936 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:01 crc kubenswrapper[4748]: E0216 14:54:01.760184 4748 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:01 crc kubenswrapper[4748]: E0216 14:54:01.760395 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:54:33.760366844 +0000 UTC m=+99.452035923 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.836422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.836493 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.836511 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.836536 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.836556 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.940307 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.940366 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.940390 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.940416 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.940426 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:01Z","lastTransitionTime":"2026-02-16T14:54:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.962082 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 19:44:50.766097424 +0000 UTC Feb 16 14:54:01 crc kubenswrapper[4748]: I0216 14:54:01.993514 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:01 crc kubenswrapper[4748]: E0216 14:54:01.993651 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.028593 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.028650 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.028670 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.028695 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.028744 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.044165 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.051478 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.051522 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.051531 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.051545 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.051556 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.068350 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.073517 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.073579 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.073601 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.073629 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.073649 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.088298 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.093994 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.094076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.094103 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.094139 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.094163 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.116417 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.124450 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.124569 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.124592 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.124986 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.125297 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.145491 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:02Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.145760 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.149482 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.149531 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.149541 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.149559 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.149571 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.252659 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.252702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.252715 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.252744 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.252759 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.355341 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.355366 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.355374 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.355389 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.355398 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.457547 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.457613 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.457634 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.457747 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.457770 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.560154 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.560243 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.560264 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.560294 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.560315 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.663299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.663352 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.663364 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.663386 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.663398 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.765775 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.765809 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.765817 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.765829 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.765838 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.869068 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.869140 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.869161 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.869193 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.869212 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.963305 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 20:25:37.507339322 +0000 UTC Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.971806 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.971866 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.971877 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.971896 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.971907 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:02Z","lastTransitionTime":"2026-02-16T14:54:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.994312 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.994340 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.994447 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:02 crc kubenswrapper[4748]: I0216 14:54:02.994465 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.994552 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:02 crc kubenswrapper[4748]: E0216 14:54:02.994611 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.075025 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.075055 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.075066 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.075081 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.075091 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.177177 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.177215 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.177226 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.177240 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.177249 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.280606 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.280641 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.280651 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.280667 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.280678 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.384079 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.384129 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.384140 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.384156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.384170 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.487818 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.487910 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.487942 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.487984 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.488018 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.503060 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/0.log" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.503117 4748 generic.go:334] "Generic (PLEG): container finished" podID="1724aef8-25e0-40aa-86be-2ca7849960f1" containerID="e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa" exitCode=1 Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.503161 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerDied","Data":"e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.503606 4748 scope.go:117] "RemoveContainer" containerID="e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.516180 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d
2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.531382 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.542610 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.556837 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.571067 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.579820 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.591781 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.592189 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.592236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.592255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.592283 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.592296 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.612549 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.624785 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.643747 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.657647 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.672941 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.684915 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.694836 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 
14:54:03.694971 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.695058 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.695134 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.695695 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.697359 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.710367 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.721278 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a
42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc 
kubenswrapper[4748]: I0216 14:54:03.734363 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e3353
64db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 
14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\
\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.747223 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:03Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.797994 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.798043 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.798055 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.798075 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.798090 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.900422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.900628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.900785 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.900889 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.900953 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:03Z","lastTransitionTime":"2026-02-16T14:54:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.963782 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:22:44.626851291 +0000 UTC Feb 16 14:54:03 crc kubenswrapper[4748]: I0216 14:54:03.993876 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:03 crc kubenswrapper[4748]: E0216 14:54:03.994018 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.003997 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.004243 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.004309 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.004382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.004445 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.107242 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.107572 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.107667 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.107777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.107851 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.210859 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.211181 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.211246 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.211324 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.211415 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.314950 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.315048 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.315077 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.315112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.315144 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.418620 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.418671 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.418681 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.418716 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.418747 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.508472 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/0.log" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.508538 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerStarted","Data":"0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.520902 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.520975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.520988 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.521012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.521029 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.537317 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.557091 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.573044 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.591455 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.610184 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.622923 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.625038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.625094 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.625116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.625147 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.625171 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.635876 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.650359 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.676803 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.694472 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.712983 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.727659 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.727696 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.727709 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.727755 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.727768 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.730982 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.751230 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.776142 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.792463 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.812784 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
4:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.826731 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.831135 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.831168 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.831177 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.831194 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.831206 4748 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.846896 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initC
ontainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:04Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.935309 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.935357 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.935372 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:04 crc 
kubenswrapper[4748]: I0216 14:54:04.935388 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.935397 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:04Z","lastTransitionTime":"2026-02-16T14:54:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.964747 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:05:02.862952165 +0000 UTC Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.994401 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.994483 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:04 crc kubenswrapper[4748]: I0216 14:54:04.994445 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:04 crc kubenswrapper[4748]: E0216 14:54:04.994634 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:04 crc kubenswrapper[4748]: E0216 14:54:04.994813 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:04 crc kubenswrapper[4748]: E0216 14:54:04.994883 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.010025 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.031141 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.038016 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.038070 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.038081 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 
14:54:05.038097 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.038109 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.053620 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcab
b9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2a
d05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.068775 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d
2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.084826 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.100175 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.118330 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.133408 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.142570 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 
14:54:05.142616 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.142626 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.142642 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.142657 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.145961 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.159894 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.181789 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.195239 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.213915 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/se
rving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.226822 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f
1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.238997 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.245943 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.246014 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.246043 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc 
kubenswrapper[4748]: I0216 14:54:05.246081 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.246110 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.259300 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14
:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 
14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.275431 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.287035 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:05Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.349303 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.349389 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.349407 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.349432 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.349448 4748 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.452426 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.452477 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.452486 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.452504 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.452516 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.555156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.555223 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.555244 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.555274 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.555296 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.657663 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.657695 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.657704 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.657739 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.657750 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.760222 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.760254 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.760263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.760277 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.760287 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.862696 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.862745 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.862753 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.862767 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.862775 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.965001 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 20:42:53.588389132 +0000 UTC Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.965876 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.965975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.966058 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.966135 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.966198 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:05Z","lastTransitionTime":"2026-02-16T14:54:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:05 crc kubenswrapper[4748]: I0216 14:54:05.993700 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:05 crc kubenswrapper[4748]: E0216 14:54:05.994174 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.068569 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.068628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.068638 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.068652 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.068661 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.170902 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.170937 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.170945 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.170957 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.170965 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.275255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.275313 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.275339 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.275370 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.275393 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.378956 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.379037 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.379061 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.379092 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.379119 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.483242 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.483310 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.483330 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.483356 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.483397 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.586762 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.586808 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.586819 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.586837 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.586849 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.689741 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.689777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.689786 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.689798 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.689809 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.792455 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.792580 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.792605 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.792628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.792645 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.896775 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.896848 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.896861 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.896878 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.896912 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.965438 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 23:43:17.279812414 +0000 UTC Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.993910 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.993984 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:06 crc kubenswrapper[4748]: E0216 14:54:06.994043 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.994065 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:06 crc kubenswrapper[4748]: E0216 14:54:06.994173 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:06 crc kubenswrapper[4748]: E0216 14:54:06.994228 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.998876 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.998929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.998944 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.998960 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:06 crc kubenswrapper[4748]: I0216 14:54:06.998974 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:06Z","lastTransitionTime":"2026-02-16T14:54:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.103519 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.103582 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.103602 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.103646 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.103664 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.206611 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.206654 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.206666 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.206684 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.206698 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.309962 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.310063 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.310087 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.310121 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.310145 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.414394 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.414453 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.414472 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.414496 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.414517 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.516966 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.517008 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.517019 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.517033 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.517041 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.620650 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.620707 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.620764 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.620797 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.620823 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.723885 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.723955 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.723973 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.724000 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.724017 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.826914 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.826978 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.827001 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.827030 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.827052 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.930678 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.930749 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.930763 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.930783 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.930839 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:07Z","lastTransitionTime":"2026-02-16T14:54:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.966350 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 01:39:17.599285754 +0000 UTC Feb 16 14:54:07 crc kubenswrapper[4748]: I0216 14:54:07.994145 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:07 crc kubenswrapper[4748]: E0216 14:54:07.994431 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.033689 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.033745 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.033757 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.033773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.033785 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.138050 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.138106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.138118 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.138139 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.138156 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.241449 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.241495 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.241513 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.241537 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.241558 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.344680 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.344753 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.344769 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.344788 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.344805 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.447170 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.447252 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.447276 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.447313 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.447340 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.549616 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.549673 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.549690 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.549749 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.549769 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.652661 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.652775 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.652795 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.652825 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.652843 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.754843 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.754894 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.754908 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.754928 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.754942 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.857166 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.857222 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.857236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.857256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.857274 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.960546 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.960632 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.960651 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.961071 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.961356 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:08Z","lastTransitionTime":"2026-02-16T14:54:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.966833 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 10:13:34.993419776 +0000 UTC Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.994222 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.994320 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:08 crc kubenswrapper[4748]: I0216 14:54:08.994343 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:08 crc kubenswrapper[4748]: E0216 14:54:08.994474 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:08 crc kubenswrapper[4748]: E0216 14:54:08.994577 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:08 crc kubenswrapper[4748]: E0216 14:54:08.994782 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.064690 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.064754 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.064774 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.064793 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.064808 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.168174 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.169929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.170503 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.170573 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.170824 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.273060 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.273103 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.273111 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.273124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.273134 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.374827 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.374859 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.374869 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.374885 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.374896 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.477492 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.477611 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.477629 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.477649 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.477662 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.580461 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.580525 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.580545 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.580570 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.580591 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.683831 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.683907 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.683927 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.683953 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.683974 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.787431 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.787523 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.787540 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.787565 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.787583 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.891856 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.891910 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.891922 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.891943 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.891958 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.967868 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 20:22:41.718134344 +0000 UTC Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.993453 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:09 crc kubenswrapper[4748]: E0216 14:54:09.993659 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.995042 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.995094 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.995110 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.995131 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:09 crc kubenswrapper[4748]: I0216 14:54:09.995147 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:09Z","lastTransitionTime":"2026-02-16T14:54:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.098676 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.098759 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.098778 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.098802 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.098820 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.202444 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.202513 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.202530 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.202556 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.202577 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.306233 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.306334 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.306357 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.306419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.306441 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.409635 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.409688 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.409708 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.409775 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.409800 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.513773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.513835 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.513851 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.513880 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.513897 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.617102 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.617137 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.617147 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.617165 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.617177 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.719926 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.720046 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.720101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.720138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.720192 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.823603 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.823670 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.823693 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.823773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.823804 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.927571 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.927643 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.927666 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.927697 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.927756 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:10Z","lastTransitionTime":"2026-02-16T14:54:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.968112 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:06:17.152351284 +0000 UTC Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.994083 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.994146 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:10 crc kubenswrapper[4748]: E0216 14:54:10.994320 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:10 crc kubenswrapper[4748]: I0216 14:54:10.994359 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:10 crc kubenswrapper[4748]: E0216 14:54:10.994604 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:10 crc kubenswrapper[4748]: E0216 14:54:10.994840 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.030245 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.030302 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.030321 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.030346 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.030367 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.133534 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.133608 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.133628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.133656 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.133677 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.236193 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.236252 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.236266 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.236289 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.236307 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.340645 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.340753 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.340771 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.340796 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.340814 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.444223 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.444280 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.444301 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.444328 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.444348 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.547893 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.547957 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.547976 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.548000 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.548019 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.650923 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.650984 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.650995 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.651012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.651026 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.754169 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.754228 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.754238 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.754257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.754267 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.857978 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.858061 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.858088 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.858124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.858146 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.961966 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.962037 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.962055 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.962080 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.962098 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:11Z","lastTransitionTime":"2026-02-16T14:54:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.968510 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:22:48.391098467 +0000 UTC Feb 16 14:54:11 crc kubenswrapper[4748]: I0216 14:54:11.993464 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:11 crc kubenswrapper[4748]: E0216 14:54:11.993757 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.065700 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.065833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.065856 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.065883 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.065900 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.169124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.169187 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.169204 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.169226 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.169244 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.272994 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.273053 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.273072 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.273098 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.273119 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.295021 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.295073 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.295091 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.295121 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.295141 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.318285 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:12Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.324353 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.324441 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.324465 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.324498 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.324522 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.346765 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:12Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.352048 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.352120 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.352136 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.352162 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.352181 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.374053 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:12Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.379094 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.379158 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.379185 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.379218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.379243 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.402054 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:12Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.408226 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.408297 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.408317 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.408358 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.408381 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.431433 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:12Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.431682 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.434228 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.434283 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.434300 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.434329 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.434348 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.538428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.538514 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.538531 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.538559 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.538579 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.643124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.643195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.643219 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.643251 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.643310 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.747626 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.747709 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.747802 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.747834 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.747860 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.851796 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.851865 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.851884 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.851908 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.851928 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.955580 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.955644 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.955667 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.955702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.955771 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:12Z","lastTransitionTime":"2026-02-16T14:54:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.969626 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 15:05:35.39219874 +0000 UTC Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.993650 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.993777 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.993834 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.993930 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.994081 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:12 crc kubenswrapper[4748]: E0216 14:54:12.994821 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:12 crc kubenswrapper[4748]: I0216 14:54:12.995137 4748 scope.go:117] "RemoveContainer" containerID="cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.059046 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.059114 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.059134 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.059161 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.059183 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.163765 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.163845 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.163867 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.163901 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.164036 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.267168 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.267212 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.267221 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.267239 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.267251 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.370604 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.370675 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.370692 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.370756 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.370776 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.474415 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.474480 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.474499 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.474528 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.474547 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.547293 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/2.log" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.551360 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.551951 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.577650 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.577742 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.577761 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.577784 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.577802 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.581508 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.608532 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.622496 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.637495 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.657608 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.670882 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc 
kubenswrapper[4748]: I0216 14:54:13.680452 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.680477 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.680487 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.680499 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.680516 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.684367 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.698135 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.712652 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.725513 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.743599 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
4:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.758750 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.770361 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.782961 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.783032 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.783049 4748 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.783080 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.783097 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.787062 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.802621 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.815148 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.828696 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.841320 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:13Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.885952 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.885997 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.886006 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.886024 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.886037 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.970095 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 12:22:56.343964962 +0000 UTC Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.988614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.988676 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.988696 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.988740 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.988758 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:13Z","lastTransitionTime":"2026-02-16T14:54:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:13 crc kubenswrapper[4748]: I0216 14:54:13.994265 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:13 crc kubenswrapper[4748]: E0216 14:54:13.994461 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.092996 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.093089 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.093116 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.093156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.093184 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.196868 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.196956 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.196975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.197005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.197024 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.300122 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.300194 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.300212 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.300242 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.300263 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.403420 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.403480 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.403496 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.403522 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.403545 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.506792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.506837 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.506848 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.506866 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.506880 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.557996 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/3.log" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.559256 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/2.log" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.562858 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" exitCode=1 Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.562925 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.562978 4748 scope.go:117] "RemoveContainer" containerID="cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.563633 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 14:54:14 crc kubenswrapper[4748]: E0216 14:54:14.563991 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.584095 4748 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.602574 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.610831 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.610888 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.610904 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.610929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.610948 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.634089 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.663449 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"vn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, 
Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 14:54:13.970930 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970931 6789 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970940 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970945 6789 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970955 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970966 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970972 6789 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node 
crc\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa3
59f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.680747 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.701111 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.714945 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.715001 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.715014 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.715034 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.715050 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.721247 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.739045 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.754261 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.766453 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.780609 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T1
4:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.795069 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.812109 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f354
47153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.818284 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.818358 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.818371 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.818394 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.818413 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.830958 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.848661 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.868382 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.883277 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.897676 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:14Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.921539 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.921606 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.921621 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 
14:54:14.921650 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.921661 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:14Z","lastTransitionTime":"2026-02-16T14:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.970317 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 21:21:11.71545146 +0000 UTC Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.993880 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.993883 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:14 crc kubenswrapper[4748]: E0216 14:54:14.994040 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:14 crc kubenswrapper[4748]: I0216 14:54:14.994051 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:14 crc kubenswrapper[4748]: E0216 14:54:14.994330 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:14 crc kubenswrapper[4748]: E0216 14:54:14.994518 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.018134 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.020213 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.024312 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.024381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.024400 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.024437 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.024467 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.038213 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.055125 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.079628 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.095754 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.115141 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.126788 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.126822 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.126833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.126947 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.126962 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.138261 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb251d5dad3e6417bc7e9bb284135abdc815de81d252c3d49d3b20f4c3fa53b0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:53:41Z\\\",\\\"message\\\":\\\"removal\\\\nI0216 14:53:41.100897 6389 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0216 14:53:41.100903 6389 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0216 14:53:41.100933 6389 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0216 14:53:41.100939 6389 handler.go:190] Sending 
*v1.Pod event handler 3 for removal\\\\nI0216 14:53:41.100954 6389 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0216 14:53:41.100965 6389 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0216 14:53:41.100969 6389 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0216 14:53:41.100999 6389 handler.go:208] Removed *v1.Node event handler 7\\\\nI0216 14:53:41.101008 6389 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0216 14:53:41.101015 6389 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0216 14:53:41.101022 6389 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0216 14:53:41.101031 6389 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0216 14:53:41.101037 6389 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0216 14:53:41.101043 6389 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0216 14:53:41.101049 6389 handler.go:208] Removed *v1.Node event handler 2\\\\nI0216 14:53:41.113841 6389 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"vn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 14:54:13.970930 6789 
obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970931 6789 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970940 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970945 6789 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970955 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970966 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970972 6789 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.151933 4748 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc 
kubenswrapper[4748]: I0216 14:54:15.169532 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.185481 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.205214 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.223042 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.229111 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 
14:54:15.229179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.229198 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.229228 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.229247 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.237475 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.251186 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.263105 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.275952 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.290563 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f354
47153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.307146 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.331620 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.331704 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.331754 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.331783 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.331802 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.435106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.435164 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.435176 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.435203 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.435216 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.537765 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.538157 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.538288 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.538410 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.538532 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.568610 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/3.log" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.577748 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 14:54:15 crc kubenswrapper[4748]: E0216 14:54:15.577886 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.594311 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.626637 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71163778-1b1d-4855-bd87-daad5122a165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46bca6374693df66963dfed053db555018b3455f47c662346a68245715a5c9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5a1b79b7a97ad89754006e9477aba1132bdbdc85e34c90bae719ed5830306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96d10fa620a20fed7f80f566f8f7b1aab97ef7b84bdb4fc48e4b0f52d6fd25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53d8722f631552f16e54360803aa7a60d383f56d050cb9faacd0249fb1d185d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b27e93608fba4279b953fa1dcb00de496850daa23c2c3d92e739d7358fe9bb81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.645095 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.645148 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.645163 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.645191 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.645208 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.650671 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.669008 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.687890 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.705749 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.720804 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.746109 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.749046 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.749163 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.749238 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.749315 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.749388 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.760099 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.777553 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] 
Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.811201 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"vn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 14:54:13.970930 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970931 6789 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970940 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970945 6789 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970955 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970966 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970972 6789 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.852334 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.854424 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.854625 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.854751 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.854854 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.854938 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.870580 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.899284 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.920731 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.941173 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.956043 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.958311 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.958370 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.958383 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.958408 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.958424 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:15Z","lastTransitionTime":"2026-02-16T14:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.971200 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:16:29.745446792 +0000 UTC Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.973289 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 
14:54:15.985105 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:15Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:15 crc kubenswrapper[4748]: I0216 14:54:15.994243 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:15 crc kubenswrapper[4748]: E0216 14:54:15.994407 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.062627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.062673 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.062683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.062700 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.062726 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.166652 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.166763 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.166785 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.166819 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.166839 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.271029 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.271124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.271155 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.271192 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.271218 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.375350 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.375447 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.375466 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.375497 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.375516 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.478948 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.479023 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.479041 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.479069 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.479090 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.581469 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.581537 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.581559 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.581584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.581604 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.685126 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.685207 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.685231 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.685262 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.685288 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.789272 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.789348 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.789367 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.789399 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.789420 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.892670 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.892777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.892801 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.892833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.892858 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.971452 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 00:08:46.857819546 +0000 UTC
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.994403 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.994433 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.994482 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:16 crc kubenswrapper[4748]: E0216 14:54:16.994903 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:16 crc kubenswrapper[4748]: E0216 14:54:16.995062 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:16 crc kubenswrapper[4748]: E0216 14:54:16.994608 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.997190 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.997257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.997278 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.997306 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:16 crc kubenswrapper[4748]: I0216 14:54:16.997332 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:16Z","lastTransitionTime":"2026-02-16T14:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.100770 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.100853 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.100872 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.100903 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.100922 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.203962 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.204047 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.204073 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.204117 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.204142 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.307461 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.307547 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.307567 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.307598 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.307618 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.410800 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.410868 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.410885 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.410910 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.410928 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.514316 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.514378 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.514398 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.514422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.514442 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.617806 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.617853 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.617866 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.617883 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.617897 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.721647 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.721799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.721828 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.721860 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.721885 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.825764 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.825839 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.825854 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.825880 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.825901 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.930085 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.930166 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.930190 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.930220 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.930245 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:17Z","lastTransitionTime":"2026-02-16T14:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.971588 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 15:10:21.6930005 +0000 UTC
Feb 16 14:54:17 crc kubenswrapper[4748]: I0216 14:54:17.994319 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:54:17 crc kubenswrapper[4748]: E0216 14:54:17.994522 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.034549 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.034636 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.034663 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.034695 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.034762 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.138755 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.138867 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.138889 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.138917 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.138936 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.243157 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.243236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.243260 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.243294 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.243321 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.346229 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.346271 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.346283 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.346302 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.346316 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.449580 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.449613 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.449623 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.449639 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.449652 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.552264 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.552316 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.552335 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.552358 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.552376 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.654541 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.654600 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.654618 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.654642 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.654661 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.757568 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.757627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.757650 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.757676 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.757697 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.785872 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.786009 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.786094 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786120 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.786091898 +0000 UTC m=+148.477760977 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.786192 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786259 4748 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786286 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786318 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786329 4748 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786342 4748 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786352 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.786325614 +0000 UTC m=+148.477994693 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786381 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.786366965 +0000 UTC m=+148.478036034 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.786413 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.786390406 +0000 UTC m=+148.478059485 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.860851 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.860906 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.860917 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.860933 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.860944 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.887647 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.887929 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.887970 4748 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.887992 4748 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.888081 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.888055582 +0000 UTC m=+148.579724661 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.964406 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.964491 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.964509 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.964532 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.964550 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:18Z","lastTransitionTime":"2026-02-16T14:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.972148 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:48:53.818007531 +0000 UTC
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.994048 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.994189 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.994373 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:18 crc kubenswrapper[4748]: I0216 14:54:18.994417 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.994782 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:18 crc kubenswrapper[4748]: E0216 14:54:18.994640 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.068228 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.068303 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.068317 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.068341 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.068358 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.170876 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.170956 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.170983 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.171016 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.171039 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.273878 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.273946 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.273968 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.274009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.274032 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.377439 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.377521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.377545 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.377573 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.377596 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.485209 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.485681 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.485701 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.485757 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.485784 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.588410 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.588462 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.588478 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.588503 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.588523 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.692133 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.692236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.692263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.692300 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.692325 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.795707 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.795799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.795815 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.795841 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.795862 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.899190 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.899255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.899280 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.899312 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.899335 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:19Z","lastTransitionTime":"2026-02-16T14:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.972356 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 07:23:02.723345148 +0000 UTC Feb 16 14:54:19 crc kubenswrapper[4748]: I0216 14:54:19.993866 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:19 crc kubenswrapper[4748]: E0216 14:54:19.994128 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.002218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.002296 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.002316 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.002342 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.002360 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.105862 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.105934 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.105953 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.105978 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.105995 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.208778 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.208851 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.208883 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.208916 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.208954 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.312049 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.312108 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.312131 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.312153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.312170 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.414562 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.414621 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.414638 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.414662 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.414678 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.517585 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.517641 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.517658 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.517686 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.517703 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.620923 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.621005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.621027 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.621056 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.621080 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.723832 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.723917 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.723935 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.723961 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.723979 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.827446 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.827580 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.827610 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.827639 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.827657 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.931584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.931664 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.931684 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.931711 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.931756 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:20Z","lastTransitionTime":"2026-02-16T14:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.973089 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:28:24.138781395 +0000 UTC Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.993659 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.993748 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:20 crc kubenswrapper[4748]: E0216 14:54:20.993859 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:20 crc kubenswrapper[4748]: I0216 14:54:20.993936 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:20 crc kubenswrapper[4748]: E0216 14:54:20.994239 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:20 crc kubenswrapper[4748]: E0216 14:54:20.994324 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.034584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.034674 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.034697 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.034876 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.034922 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.138100 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.138198 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.138217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.138242 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.138259 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.241208 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.241268 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.241284 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.241307 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.241324 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.344906 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.344989 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.345014 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.345051 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.345075 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.448186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.448239 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.448251 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.448269 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.448282 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.551354 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.551412 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.551424 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.551465 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.551486 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.655266 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.655340 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.655357 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.655380 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.655398 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.759288 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.759351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.759368 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.759393 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.759413 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.862550 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.862620 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.862640 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.862668 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.862691 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.965899 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.966005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.966029 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.966056 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.966075 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:21Z","lastTransitionTime":"2026-02-16T14:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.973299 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:50:18.089485555 +0000 UTC Feb 16 14:54:21 crc kubenswrapper[4748]: I0216 14:54:21.993763 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:21 crc kubenswrapper[4748]: E0216 14:54:21.993951 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.068976 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.069102 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.069121 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.069144 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.069163 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.172498 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.172567 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.172586 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.172613 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.172630 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.275967 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.276035 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.276053 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.276081 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.276101 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.379351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.379418 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.379435 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.379459 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.379477 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.482699 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.482783 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.482799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.482830 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.482866 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.585183 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.585241 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.585264 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.585290 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.585310 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.688568 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.688646 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.688669 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.688697 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.688751 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.791958 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.792034 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.792052 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.792076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.792100 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.797285 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.797358 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.797378 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.797405 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.797421 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: E0216 14:54:22.821247 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.826954 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.827004 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.827023 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.827049 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.827068 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: E0216 14:54:22.912687 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.918517 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.918575 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.918594 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.918617 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.918637 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: E0216 14:54:22.942116 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:22Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:22 crc kubenswrapper[4748]: E0216 14:54:22.942331 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.944452 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.944502 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.944521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.944545 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.944565 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:22Z","lastTransitionTime":"2026-02-16T14:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.974139 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:18:19.956855323 +0000 UTC Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.993933 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.994011 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:22 crc kubenswrapper[4748]: I0216 14:54:22.994106 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:22 crc kubenswrapper[4748]: E0216 14:54:22.994283 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:22 crc kubenswrapper[4748]: E0216 14:54:22.994391 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:22 crc kubenswrapper[4748]: E0216 14:54:22.994613 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.047703 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.047829 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.047849 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.047872 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.047892 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.151377 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.151435 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.151455 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.151481 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.151500 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.254315 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.254364 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.254374 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.254391 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.254403 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.357615 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.357676 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.357692 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.357760 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.357785 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.461001 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.461076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.461095 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.461124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.461142 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.564922 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.565000 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.565025 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.565055 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.565078 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.668690 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.668818 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.668838 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.668866 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.668885 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.771888 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.771947 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.771968 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.772015 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.772041 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.875189 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.875248 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.875266 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.875293 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.875316 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.974552 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 13:49:18.408306266 +0000 UTC Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.977509 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.977568 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.977592 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.977618 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.977639 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:23Z","lastTransitionTime":"2026-02-16T14:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:23 crc kubenswrapper[4748]: I0216 14:54:23.993229 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:23 crc kubenswrapper[4748]: E0216 14:54:23.993388 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.080784 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.080850 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.080876 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.080900 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.080921 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.183800 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.183876 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.183893 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.183917 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.183938 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.287884 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.287963 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.287985 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.288017 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.288042 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.390310 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.390368 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.390379 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.390395 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.390407 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.494078 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.494138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.494159 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.494185 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.494204 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.596871 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.596911 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.596948 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.596964 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.596975 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.700507 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.700581 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.700603 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.700628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.700647 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.803816 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.803886 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.803904 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.803927 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.803970 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.907620 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.907678 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.907694 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.907739 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.907752 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:24Z","lastTransitionTime":"2026-02-16T14:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.975636 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:25:26.777840319 +0000 UTC Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.994788 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.994867 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:24 crc kubenswrapper[4748]: I0216 14:54:24.994837 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:24 crc kubenswrapper[4748]: E0216 14:54:24.995215 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:24 crc kubenswrapper[4748]: E0216 14:54:24.995994 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:24 crc kubenswrapper[4748]: E0216 14:54:24.996457 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.011183 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.011245 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.011264 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.011293 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.011313 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.018602 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.063829 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71163778-1b1d-4855-bd87-daad5122a165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46bca6374693df66963dfed053db555018b3455f47c662346a68245715a5c9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5a1b79b7a97ad89754006e9477aba1132bdbdc85e34c90bae719ed5830306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96d10fa620a20fed7f80f566f8f7b1aab97ef7b84bdb4fc48e4b0f52d6fd25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53d8722f631552f16e54360803aa7a60d383f56d050cb9faacd0249fb1d185d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b27e93608fba4279b953fa1dcb00de496850daa23c2c3d92e739d7358fe9bb81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.084314 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.099436 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.113879 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.113923 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.113934 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.113956 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.113969 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.123398 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b550
48cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.145093 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.169851 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.198268 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.217459 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.218397 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.218457 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.218476 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.218504 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.218524 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.239271 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc 
kubenswrapper[4748]: I0216 14:54:25.263999 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.302354 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.321472 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.321517 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.321529 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.321556 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.321575 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.338165 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.362884 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.378469 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.395603 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.425354 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.425428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.425447 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 
14:54:25.425495 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.425513 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.428893 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"vn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 14:54:13.970930 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970931 6789 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970940 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970945 6789 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970955 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970966 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970972 6789 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.447517 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.461172 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:25Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.529318 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.529387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.529404 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.529433 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.529450 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.633232 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.633284 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.633300 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.633322 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.633338 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.736913 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.736976 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.737014 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.737049 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.737071 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.841960 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.842056 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.842074 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.842109 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.842136 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.946779 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.946891 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.946910 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.946973 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.946993 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:25Z","lastTransitionTime":"2026-02-16T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.975798 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 15:05:46.966794088 +0000 UTC Feb 16 14:54:25 crc kubenswrapper[4748]: I0216 14:54:25.994223 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:25 crc kubenswrapper[4748]: E0216 14:54:25.994431 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.050181 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.050266 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.050294 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.050328 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.050355 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.154978 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.155060 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.155080 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.155113 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.155133 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.260009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.260094 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.260111 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.260139 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.260163 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.364956 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.365054 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.365078 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.365110 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.365134 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.469084 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.469155 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.469174 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.469249 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.469269 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.573057 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.573167 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.573190 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.573257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.573281 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.676684 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.676819 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.676843 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.676871 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.676891 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.779972 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.780046 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.780065 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.780097 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.780115 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.883457 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.883526 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.883547 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.883571 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.883590 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.976039 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:11:58.702250522 +0000 UTC Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.987055 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.987143 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.987169 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.987205 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.987225 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:26Z","lastTransitionTime":"2026-02-16T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.994079 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.994121 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:26 crc kubenswrapper[4748]: E0216 14:54:26.994271 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:26 crc kubenswrapper[4748]: I0216 14:54:26.994303 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:26 crc kubenswrapper[4748]: E0216 14:54:26.994387 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:26 crc kubenswrapper[4748]: E0216 14:54:26.994478 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.090288 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.090365 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.090388 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.090417 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.090442 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.194156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.194233 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.194262 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.194297 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.194322 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.298302 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.298381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.298406 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.298435 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.298458 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.402078 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.402161 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.402182 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.402215 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.402270 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.505598 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.505683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.505703 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.505775 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.505795 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.609476 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.609555 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.609573 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.609612 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.609634 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.712694 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.712774 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.712871 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.712892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.712905 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.816397 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.816450 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.816463 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.816486 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.816502 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.919607 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.919697 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.919751 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.919788 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.919813 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:27Z","lastTransitionTime":"2026-02-16T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.976947 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:49:06.167195265 +0000 UTC Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.993608 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:27 crc kubenswrapper[4748]: E0216 14:54:27.993839 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:27 crc kubenswrapper[4748]: I0216 14:54:27.995014 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 14:54:27 crc kubenswrapper[4748]: E0216 14:54:27.995283 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.023378 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.023647 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.023870 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.024066 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.024234 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.127832 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.128082 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.128198 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.128289 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.128366 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.231748 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.232233 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.232394 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.232546 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.232679 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.335926 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.335993 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.336010 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.336034 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.336052 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.439380 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.439459 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.439486 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.439524 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.439551 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.543614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.543679 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.543698 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.543762 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.543790 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.648135 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.648251 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.648276 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.648309 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.648332 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.752434 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.752503 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.752521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.752548 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.752565 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.856141 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.856269 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.856287 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.856315 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.856335 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.960964 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.961069 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.961100 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.961142 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.961170 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:28Z","lastTransitionTime":"2026-02-16T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.977382 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:44:55.748884025 +0000 UTC Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.993938 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:28 crc kubenswrapper[4748]: E0216 14:54:28.994205 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.994212 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:28 crc kubenswrapper[4748]: I0216 14:54:28.994314 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:28 crc kubenswrapper[4748]: E0216 14:54:28.994386 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:28 crc kubenswrapper[4748]: E0216 14:54:28.994565 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.064369 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.064782 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.064920 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.065081 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.065236 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.168906 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.168977 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.168998 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.169024 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.169047 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.273519 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.273628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.273652 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.273757 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.273784 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.378208 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.378312 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.378335 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.378363 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.378386 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.482579 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.482635 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.482656 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.482688 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.482707 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.587978 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.588054 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.588076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.588106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.588134 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.692657 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.692795 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.692828 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.692861 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.692884 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.796374 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.796465 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.796491 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.796526 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.796551 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.900445 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.900528 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.900547 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.900577 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.900597 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:29Z","lastTransitionTime":"2026-02-16T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.977838 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:30:54.233811945 +0000 UTC Feb 16 14:54:29 crc kubenswrapper[4748]: I0216 14:54:29.994219 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:29 crc kubenswrapper[4748]: E0216 14:54:29.994466 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.004126 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.004217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.004245 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.004334 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.004363 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.108546 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.108605 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.108623 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.108647 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.108665 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.211703 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.211790 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.211830 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.211852 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.211866 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.315450 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.315524 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.315546 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.315574 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.315599 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.419469 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.419583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.419602 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.419631 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.419652 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.524124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.524275 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.524298 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.524326 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.524347 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.627263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.627308 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.627324 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.627345 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.627362 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.730900 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.730967 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.730983 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.731007 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.731025 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.834431 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.834523 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.834549 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.834580 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.834603 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.938219 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.938454 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.938477 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.938504 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.938525 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:30Z","lastTransitionTime":"2026-02-16T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.978183 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 18:13:42.005771951 +0000 UTC Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.993560 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.993747 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:30 crc kubenswrapper[4748]: E0216 14:54:30.993816 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:30 crc kubenswrapper[4748]: I0216 14:54:30.993947 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:30 crc kubenswrapper[4748]: E0216 14:54:30.994138 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:30 crc kubenswrapper[4748]: E0216 14:54:30.994243 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.042260 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.042322 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.042348 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.042381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.042405 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.145301 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.145347 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.145359 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.145382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.145394 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.248777 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.248843 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.248863 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.248888 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.248905 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.352180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.352245 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.352263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.352293 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.352315 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.455669 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.455771 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.455797 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.455826 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.455850 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.559469 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.559530 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.559546 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.559569 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.559588 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.662468 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.662548 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.662565 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.662592 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.662610 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.765150 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.765202 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.765217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.765235 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.765249 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.868702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.868790 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.868809 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.868833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.868851 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.972422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.972491 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.972508 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.972532 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.972550 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:31Z","lastTransitionTime":"2026-02-16T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.978850 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:04:50.934690405 +0000 UTC Feb 16 14:54:31 crc kubenswrapper[4748]: I0216 14:54:31.994363 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:31 crc kubenswrapper[4748]: E0216 14:54:31.995098 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.075917 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.075998 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.076021 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.076050 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.076072 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.179393 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.179450 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.179467 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.179489 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.179508 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.282145 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.282214 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.282256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.282281 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.282300 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.385997 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.386140 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.386161 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.386188 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.386207 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.489045 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.489101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.489123 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.489150 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.489172 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.592171 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.592245 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.592265 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.592291 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.592309 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.694792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.694850 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.694866 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.694888 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.694939 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.798428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.798503 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.798521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.798548 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.798568 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.903069 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.903129 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.903152 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.903180 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.903199 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:32Z","lastTransitionTime":"2026-02-16T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.979615 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 07:25:45.315244881 +0000 UTC Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.994073 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.994132 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:32 crc kubenswrapper[4748]: E0216 14:54:32.994295 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:32 crc kubenswrapper[4748]: I0216 14:54:32.994341 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:32 crc kubenswrapper[4748]: E0216 14:54:32.994583 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:32 crc kubenswrapper[4748]: E0216 14:54:32.994752 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.006649 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.006754 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.006773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.006799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.006819 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.008186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.008247 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.008270 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.008295 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.008315 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.029953 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.039832 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.040008 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.040075 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.041525 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.041550 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.063570 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.069128 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.069186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.069207 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.069236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.069257 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.088760 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.093666 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.093702 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.093727 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.093743 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.093753 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.112682 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.117130 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.117175 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.117185 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.117201 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.117211 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.136349 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:33Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.136518 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.138564 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.138645 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.138665 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.138692 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.138754 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.242041 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.242115 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.242140 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.242170 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.242189 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.345512 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.345572 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.345591 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.345616 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.345639 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.448609 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.448675 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.448748 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.448792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.448816 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.551899 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.551986 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.552009 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.552035 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.552056 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.654131 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.654197 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.654214 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.654239 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.654259 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.757266 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.757324 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.757342 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.757368 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.757385 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.761876 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.762103 4748 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.762185 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs podName:078f98ca-d871-47a5-96c3-1e818312c4c4 nodeName:}" failed. No retries permitted until 2026-02-16 14:55:37.762164917 +0000 UTC m=+163.453833956 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs") pod "network-metrics-daemon-lll47" (UID: "078f98ca-d871-47a5-96c3-1e818312c4c4") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.861555 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.861624 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.861643 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.861669 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.861690 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.970096 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.970163 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.970188 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.970218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.970241 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:33Z","lastTransitionTime":"2026-02-16T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.979868 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 05:57:54.657757073 +0000 UTC Feb 16 14:54:33 crc kubenswrapper[4748]: I0216 14:54:33.994306 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:33 crc kubenswrapper[4748]: E0216 14:54:33.995164 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.073278 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.073328 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.073347 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.073368 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.073383 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.176137 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.176213 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.176233 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.176259 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.176278 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.279037 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.279088 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.279101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.279118 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.279130 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.382387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.382446 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.382462 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.382485 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.382502 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.486156 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.486232 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.486255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.486289 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.486311 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.590317 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.590387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.590414 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.590440 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.590460 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.693439 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.693524 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.693549 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.693581 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.693604 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.797246 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.797308 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.797329 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.797357 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.797379 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.901526 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.901912 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.902039 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.902217 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.902340 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:34Z","lastTransitionTime":"2026-02-16T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.981794 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 16:54:08.319259654 +0000 UTC Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.994164 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.994234 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:34 crc kubenswrapper[4748]: E0216 14:54:34.994333 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:34 crc kubenswrapper[4748]: I0216 14:54:34.994354 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:34 crc kubenswrapper[4748]: E0216 14:54:34.994500 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:34 crc kubenswrapper[4748]: E0216 14:54:34.994634 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.006391 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.006455 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.006476 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.006502 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.006526 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.014005 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.057155 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71163778-1b1d-4855-bd87-daad5122a165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46bca6374693df66963dfed053db555018b3455f47c662346a68245715a5c9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5a1b79b7a97ad89754006e9477aba1132bdbdc85e34c90bae719ed5830306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96d10fa620a20fed7f80f566f8f7b1aab97ef7b84bdb4fc48e4b0f52d6fd25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53d8722f631552f16e54360803aa7a60d383f56d050cb9faacd0249fb1d185d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b27e93608fba4279b953fa1dcb00de496850daa23c2c3d92e739d7358fe9bb81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.077672 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.098353 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.109392 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.109433 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.109448 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.109470 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.109489 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.120256 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e1b6852b-725e-4dd3-83db-5130cbbc77d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ae6621dcb3f794f6f83efd8db5ecdbe0eebac916ef04e0e247a30f9754e5fb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7ae1260b550
48cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6544d4a2b2d843823a85b747e862d64fe0b6857604f7feb149232d8a900be290\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c6bdc68c66948385f82b6a6d525087430aa1f0c1baa9a864caa5b4ce04fe82a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.141051 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3b2f8e2-460d-46c5-a7fb-6483b0d6fa88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d47ea7d93bc4558517cfd8ddee45c899b74db722246f45be9c934f4c531fa7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0213993b84db343a79e47548823a43cc41141a4a6bb9d5ea21508b8eed8cf7fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://eae42985a9682ee6ecaf295c8ef8050db5476ed4a1db4215aef144f7d13bab42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1ea047961dc2d95177e16d5eb0293093a418ebeb249203d7d743f18d596520b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.164827 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.194502 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"81c4b8fe-3db6-4720-81d0-a1d2d33470bb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25bdea8530d29bc81730c574b92b9ac9fc427f5382d391ab9939329a775d9369\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bc6d2a1c952d3b2db4a90cc1bda67f84500b5b85bc84b0f73e8a0c441083393\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0220ce203bdce3c2e3f80b4a429f3499b5ac6ac4a2a5135518202b4160299ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ff20a87bc48b5a02bcfd7f5b2e5ac9dede713de8970b9c58ea01d5eea1de7f8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64c3e
76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64c3e76424eeedd669685ac65a45c8505126e3ec96ee126ef8de152cddf87798\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de27ef46a672e8b0fd15e8d69bf2a6f05f7fbc42088977d7bbeef577fd8dad66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cdb65243f0b4295a0741ca18d9f2ad05cb9962aa8ea6199669ba7121fcbf0c97\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wdsgq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-gt5ps\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.212289 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c9cbbc92-8258-496d-b183-2321860c64cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://244777b337048550f49c39180eb2e7e49a23055501b73ccc50a5563e0c9ea6a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://74949e78b955bee203d578271b77ef5ed8c1d2eef4dd9bb2be816a65af35d4d1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnr54\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:28Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-mv966\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.213385 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.213426 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.213444 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.213468 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.213485 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.228391 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lll47" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"078f98ca-d871-47a5-96c3-1e818312c4c4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:29Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ntt99\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:29Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lll47\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc 
kubenswrapper[4748]: I0216 14:54:35.253592 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://777e2babbc88c074ca1b408b6e5b29ca73373207df94b09886f2d5316984cc31\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.276393 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.301124 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e489b10b9baaa53f6018e0e8d573baf415c254b37e697662c21f00017fe8bffd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88e9d84aad089d1f5826b618328ed1c637e0886f2318774f1549caa041aa15f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.317069 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.317112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.317129 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.317153 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.317173 4748 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.321662 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.337895 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xbqqk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1624cecb-4def-4246-9cd2-b9a6f4e5920c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9613811565ad2d5cf9e67c32145d79a27f93675ff1d63311cdd84198f0b81b68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h96pk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:14Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xbqqk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.360876 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-dw679" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1724aef8-25e0-40aa-86be-2ca7849960f1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:03Z\\\",\\\"message\\\":\\\"2026-02-16T14:53:17+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0\\\\n2026-02-16T14:53:17+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_5006278a-4a7c-4ef7-995e-ace8dcd2eec0 to /host/opt/cni/bin/\\\\n2026-02-16T14:53:18Z [verbose] multus-daemon started\\\\n2026-02-16T14:53:18Z [verbose] Readiness Indicator file check\\\\n2026-02-16T14:54:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:54:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6q57v\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-multus\"/\"multus-dw679\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.394256 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-16T14:54:13Z\\\",\\\"message\\\":\\\"vn.org/owner\\\\\\\":\\\\\\\"openshift-service-ca-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.40\\\\\\\", Port:443, 
Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0216 14:54:13.970930 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970931 6789 services_controller.go:452] Built service openshift-service-ca-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970940 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-image-registry/node-ca-zkqs9\\\\nI0216 14:54:13.970945 6789 services_controller.go:453] Built service openshift-service-ca-operator/metrics template LB for network=default: []services.LB{}\\\\nI0216 14:54:13.970955 6789 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970966 6789 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0216 14:54:13.970972 6789 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:54:13Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9760b1cdd1358ce41a
9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-67j69\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-r662f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.413530 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fafb0b41-fe7a-4d57-a714-4666580d6ae6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b78b784ac85b85bc075d4d51a25aafc78b39a086b7abfc122c260d4f05c86f12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80510408700c70453c790a95a0a710c6fd871790
9a902c989b8ec40ef8d91750\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-d8vp7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:16Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-p7ttg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.419819 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.419900 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.419914 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc 
kubenswrapper[4748]: I0216 14:54:35.419934 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.419951 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.431049 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-zkqs9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e2941c0-a633-40af-902c-1304d8df18b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cbcbad82f131841d02d045ddd5a76836a6f7ef0488d7ca5e227e73f15b67d2dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1
e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lg4kc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:53:17Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-zkqs9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:35Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.523340 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.523473 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.523496 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.523532 4748 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.523569 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.627436 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.627499 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.627517 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.627543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.627561 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.731187 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.731268 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.731279 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.731304 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.731324 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.834081 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.834127 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.834140 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.834162 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.834176 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.938678 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.938781 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.938800 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.938833 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.938853 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:35Z","lastTransitionTime":"2026-02-16T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.983474 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 20:59:04.264513811 +0000 UTC Feb 16 14:54:35 crc kubenswrapper[4748]: I0216 14:54:35.994127 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:35 crc kubenswrapper[4748]: E0216 14:54:35.994265 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.041536 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.041860 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.041894 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.041929 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.041956 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.144756 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.144835 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.144853 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.144878 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.144899 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.249214 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.249289 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.249308 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.249381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.249401 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.352890 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.352947 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.352967 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.352991 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.353011 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.456533 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.456633 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.456673 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.456710 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.456784 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.559798 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.559874 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.559892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.559918 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.559937 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.662543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.662636 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.662658 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.662681 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.662698 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.765646 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.765746 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.765767 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.765792 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.765811 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.870046 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.870350 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.870362 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.870382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.870397 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.973509 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.973574 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.973590 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.973616 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.973643 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:36Z","lastTransitionTime":"2026-02-16T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.983998 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:43:32.989030976 +0000 UTC Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.993444 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.993533 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:36 crc kubenswrapper[4748]: I0216 14:54:36.993782 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:36 crc kubenswrapper[4748]: E0216 14:54:36.993879 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:36 crc kubenswrapper[4748]: E0216 14:54:36.994102 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:36 crc kubenswrapper[4748]: E0216 14:54:36.994185 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.076913 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.076974 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.076990 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.077012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.077032 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.179797 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.179874 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.179894 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.179920 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.179938 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.283012 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.283083 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.283100 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.283127 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.283147 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.387050 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.387125 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.387146 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.387171 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.387191 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.490528 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.490603 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.490625 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.490652 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.490671 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.593423 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.593495 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.593513 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.593538 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.593558 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.696406 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.696466 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.696484 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.696507 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.696524 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.799671 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.799754 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.799772 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.799793 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.799810 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.902524 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.902584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.902608 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.902636 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.902657 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:37Z","lastTransitionTime":"2026-02-16T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.984917 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:28:28.578601288 +0000 UTC Feb 16 14:54:37 crc kubenswrapper[4748]: I0216 14:54:37.994261 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:37 crc kubenswrapper[4748]: E0216 14:54:37.994452 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.005868 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.005921 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.005940 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.005959 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.005976 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.108951 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.109025 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.109044 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.109072 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.109105 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.212001 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.212076 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.212138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.212191 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.212211 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.314441 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.314484 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.314496 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.314513 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.314526 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.418255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.418327 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.418345 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.418376 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.418397 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.521499 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.521565 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.521583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.521607 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.521626 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.624876 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.624946 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.624963 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.624987 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.625004 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.728197 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.728259 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.728272 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.728289 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.728301 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.830786 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.830814 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.830823 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.830836 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.830844 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.934202 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.934290 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.934305 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.934369 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.934392 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:38Z","lastTransitionTime":"2026-02-16T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.986016 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 20:40:17.108765911 +0000 UTC Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.993685 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.993777 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:38 crc kubenswrapper[4748]: I0216 14:54:38.993836 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:38 crc kubenswrapper[4748]: E0216 14:54:38.993984 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:38 crc kubenswrapper[4748]: E0216 14:54:38.994134 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:38 crc kubenswrapper[4748]: E0216 14:54:38.994239 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.037179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.037241 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.037258 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.037283 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.037301 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.141005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.141081 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.141099 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.141130 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.141150 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.244423 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.244502 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.244527 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.244558 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.244583 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.348422 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.348514 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.348536 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.348567 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.348588 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.452476 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.452576 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.452594 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.452623 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.452642 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.556202 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.556281 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.556305 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.556334 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.556358 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.659051 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.659122 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.659146 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.659175 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.659200 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.762380 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.762438 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.762452 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.762472 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.762486 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.865988 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.866045 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.866062 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.866086 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.866105 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.968685 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.968779 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.968797 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.968826 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.968844 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:39Z","lastTransitionTime":"2026-02-16T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.986224 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 06:10:15.296775179 +0000 UTC Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.993661 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:39 crc kubenswrapper[4748]: E0216 14:54:39.993937 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:39 crc kubenswrapper[4748]: I0216 14:54:39.995088 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 14:54:39 crc kubenswrapper[4748]: E0216 14:54:39.995507 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.071971 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.072062 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.072085 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.072106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.072118 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.175112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.175228 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.175244 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.175265 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.175281 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.278387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.278525 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.278551 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.278579 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.278604 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.382150 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.382232 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.382256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.382285 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.382304 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.485434 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.485511 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.485529 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.485556 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.485574 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.588360 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.588402 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.588413 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.588428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.588438 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.690896 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.690952 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.690974 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.691002 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.691024 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.793387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.793463 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.793486 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.793514 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.793536 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.897173 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.897224 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.897243 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.897269 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.897288 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:40Z","lastTransitionTime":"2026-02-16T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.987490 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:19:11.3555027 +0000 UTC Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.993980 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.994091 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:40 crc kubenswrapper[4748]: I0216 14:54:40.994091 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:40 crc kubenswrapper[4748]: E0216 14:54:40.994256 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:40 crc kubenswrapper[4748]: E0216 14:54:40.994349 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:40 crc kubenswrapper[4748]: E0216 14:54:40.994552 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.000561 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.000613 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.000631 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.000655 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.000674 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.103564 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.103620 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.103638 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.103661 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.103679 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.207384 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.207445 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.207459 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.207475 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.207485 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.310104 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.310181 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.310203 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.310230 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.310256 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.413074 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.413145 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.413165 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.413190 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.413208 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.516147 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.516214 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.516233 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.516257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.516278 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.620289 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.620414 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.620444 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.620487 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.620531 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.724038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.724103 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.724120 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.724143 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.724162 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.826612 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.826653 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.826662 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.826678 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.826690 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.930037 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.930093 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.930109 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.930132 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.930152 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:41Z","lastTransitionTime":"2026-02-16T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.987925 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:55:52.191347315 +0000 UTC Feb 16 14:54:41 crc kubenswrapper[4748]: I0216 14:54:41.994364 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:41 crc kubenswrapper[4748]: E0216 14:54:41.994861 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.032927 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.032987 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.033004 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.033028 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.033046 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.135857 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.135934 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.135952 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.135975 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.135993 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.239645 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.239744 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.239781 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.239810 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.239827 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.342907 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.342965 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.342983 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.343006 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.343025 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.446083 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.446137 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.446171 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.446195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.446214 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.549663 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.549778 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.549812 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.549836 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.549853 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.653124 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.653207 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.653232 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.653257 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.653280 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.757314 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.757390 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.757417 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.757452 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.757475 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.860506 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.860573 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.860583 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.860599 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.860610 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.964339 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.964427 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.964453 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.964488 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.964511 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:42Z","lastTransitionTime":"2026-02-16T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.988451 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 16:09:23.367992046 +0000 UTC Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.993950 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.994001 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:42 crc kubenswrapper[4748]: E0216 14:54:42.994223 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:42 crc kubenswrapper[4748]: I0216 14:54:42.994333 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:42 crc kubenswrapper[4748]: E0216 14:54:42.994580 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:42 crc kubenswrapper[4748]: E0216 14:54:42.994803 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.068045 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.068112 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.068140 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.068172 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.068196 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.171600 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.171672 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.171696 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.171770 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.171800 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.275108 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.275195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.275215 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.275247 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.275340 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.341339 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.341428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.341453 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.341484 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.341507 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: E0216 14:54:43.363585 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.370319 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.370382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.370400 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.370427 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.370445 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: E0216 14:54:43.392696 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.399800 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.399903 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.399932 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.399971 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.400010 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: E0216 14:54:43.423680 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.429682 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.429763 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.429779 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.429801 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.429819 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: E0216 14:54:43.449105 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.454997 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.455089 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.455111 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.455142 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.455164 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: E0216 14:54:43.478897 4748 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T14:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d233da3b-0bcf-41f1-88d1-a438f140df6f\\\",\\\"systemUUID\\\":\\\"657f6a80-f47d-43a3-b297-9137ed51b75e\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:43Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:43 crc kubenswrapper[4748]: E0216 14:54:43.479128 4748 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.481549 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.481607 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.481628 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.481655 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.481674 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.585057 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.585127 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.585144 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.585170 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.585189 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.687941 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.687998 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.688014 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.688038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.688054 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.790893 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.790969 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.790987 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.791013 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.791032 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.894309 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.894379 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.894399 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.894429 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.894451 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.989333 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 20:09:43.036906728 +0000 UTC Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.993867 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:43 crc kubenswrapper[4748]: E0216 14:54:43.994344 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.997387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.997428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.997443 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.997465 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:43 crc kubenswrapper[4748]: I0216 14:54:43.997479 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:43Z","lastTransitionTime":"2026-02-16T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.101335 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.101421 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.101440 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.101468 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.101489 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.205768 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.205861 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.205880 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.205917 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.205939 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.308468 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.308544 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.308564 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.308598 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.308620 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.412334 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.412415 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.412439 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.412475 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.412500 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.516387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.516466 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.516485 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.516519 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.516538 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.620757 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.620845 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.620868 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.620900 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.620928 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.725521 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.725627 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.725653 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.725683 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.725703 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.829571 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.829639 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.829654 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.829678 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.829693 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.933178 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.933299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.933311 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.933329 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.933341 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:44Z","lastTransitionTime":"2026-02-16T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.990495 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 20:21:32.799335315 +0000 UTC
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.994022 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.994182 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:44 crc kubenswrapper[4748]: I0216 14:54:44.994571 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:44 crc kubenswrapper[4748]: E0216 14:54:44.994557 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:44 crc kubenswrapper[4748]: E0216 14:54:44.994978 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:44 crc kubenswrapper[4748]: E0216 14:54:44.995060 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.012238 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4be2f0c9-7076-4f1d-a23a-9a45a2e152a9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c3330cdb4cf44c23d95fb99105c306dd1e201810475cf7327e45ad900270937\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\
":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5a623a73426e365cb6f42d5511d36ee2d0d73b754f0c678b4a9b10132795ebe1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.037086 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.037145 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.037166 4748 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.037194 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.037215 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.045363 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71163778-1b1d-4855-bd87-daad5122a165\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46bca6374693df66963dfed053db555018b3455f47c662346a68245715a5c9db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ee5a1b79b7a97ad89754006e9477aba1132bdbdc85e34c90bae719ed5830306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c96d10fa620a20fed7f80f566f8f7b1aab97ef7b84bdb4fc48e4b0f52d6fd25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58
408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d53d8722f631552f16e54360803aa7a60d383f56d050cb9faacd0249fb1d185d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b27e93608fba4279b953fa1dcb00de496850daa23c2c3d92e739d7358fe9bb81\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://79b33f41f0952ebcac35f402a54cdafcdfc2eb6ebd5f4c70bcf8d03253311e6b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4cb3401c7595a8a5517f0e580f69cb9ec48c9f27e1624a8b0a3cc692bdbb0ec\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09
daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f151a6af717eb0e157ce7f29c911fbb09daf8d01c1f84ce80c1ea45b89c8a37d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.071677 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T14:52:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T14:53:14Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0216 14:53:08.618806 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0216 14:53:08.620846 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3874099969/tls.crt::/tmp/serving-cert-3874099969/tls.key\\\\\\\"\\\\nI0216 14:53:14.437887 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 14:53:14.447579 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 14:53:14.447608 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 14:53:14.447629 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 14:53:14.447636 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 14:53:14.455587 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0216 14:53:14.455635 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455644 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 14:53:14.455649 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 14:53:14.455654 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 14:53:14.455658 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 14:53:14.455663 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0216 14:53:14.455996 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF0216 14:53:14.459763 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:52:58Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdf23096d8d32c124fd6ebc79f935f354
47153585f5021914b74f3b3b946b756\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T14:52:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T14:52:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T14:52:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.092061 4748 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T14:53:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://13eb6727a61e1e5b2deb3d6935a3336855e082f49f847ebb304c1fb3db81f606\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T14:53:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-16T14:54:45Z is after 2025-08-24T17:21:41Z" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.137666 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.137223312 podStartE2EDuration="1m28.137223312s" podCreationTimestamp="2026-02-16 14:53:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.135589141 +0000 UTC m=+110.827258190" watchObservedRunningTime="2026-02-16 14:54:45.137223312 +0000 UTC m=+110.828892391" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.144305 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.144371 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.144389 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.144420 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.144443 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.160926 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=55.16089802 podStartE2EDuration="55.16089802s" podCreationTimestamp="2026-02-16 14:53:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.16088775 +0000 UTC m=+110.852556799" watchObservedRunningTime="2026-02-16 14:54:45.16089802 +0000 UTC m=+110.852567099" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.224359 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-gt5ps" podStartSLOduration=90.224324821 podStartE2EDuration="1m30.224324821s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.203825954 +0000 UTC m=+110.895495063" watchObservedRunningTime="2026-02-16 14:54:45.224324821 +0000 UTC m=+110.915993870" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.244386 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-dw679" podStartSLOduration=90.244366997 podStartE2EDuration="1m30.244366997s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.244243834 +0000 UTC m=+110.935912913" watchObservedRunningTime="2026-02-16 14:54:45.244366997 +0000 UTC m=+110.936036036" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.244519 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-mv966" 
podStartSLOduration=89.244514931 podStartE2EDuration="1m29.244514931s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.226137977 +0000 UTC m=+110.917807026" watchObservedRunningTime="2026-02-16 14:54:45.244514931 +0000 UTC m=+110.936183970" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.247209 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.247248 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.247256 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.247273 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.247284 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.350176 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.350235 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.350249 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.350272 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.350285 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.380029 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xbqqk" podStartSLOduration=91.380003891 podStartE2EDuration="1m31.380003891s" podCreationTimestamp="2026-02-16 14:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.379534569 +0000 UTC m=+111.071203648" watchObservedRunningTime="2026-02-16 14:54:45.380003891 +0000 UTC m=+111.071672940" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.395562 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podStartSLOduration=90.395541413 podStartE2EDuration="1m30.395541413s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.39542534 +0000 UTC m=+111.087094399" watchObservedRunningTime="2026-02-16 14:54:45.395541413 +0000 UTC m=+111.087210472" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.410098 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-zkqs9" podStartSLOduration=90.41007437 podStartE2EDuration="1m30.41007437s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:45.409474885 +0000 UTC m=+111.101143934" watchObservedRunningTime="2026-02-16 14:54:45.41007437 +0000 UTC m=+111.101743429" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.452886 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 
14:54:45.452963 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.452983 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.453004 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.453018 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.555938 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.556010 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.556034 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.556104 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.556132 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.659127 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.659176 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.659186 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.659205 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.659216 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.762141 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.762201 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.762216 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.762238 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.762253 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.865179 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.865229 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.865243 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.865263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.865277 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.968398 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.968465 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.968482 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.968507 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.968528 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:45Z","lastTransitionTime":"2026-02-16T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.990992 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:47:41.0303378 +0000 UTC Feb 16 14:54:45 crc kubenswrapper[4748]: I0216 14:54:45.995410 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:45 crc kubenswrapper[4748]: E0216 14:54:45.995882 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.072517 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.072593 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.072615 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.072642 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.072660 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.175983 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.176036 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.176054 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.176082 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.176104 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.278624 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.278696 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.278754 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.278789 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.278813 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.381762 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.381827 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.381847 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.381879 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.381897 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.485226 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.485299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.485323 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.485352 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.485377 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.588886 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.588942 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.588953 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.588974 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.588986 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.692830 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.692896 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.692914 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.692937 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.692954 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.796542 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.796598 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.796614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.796636 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.796650 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.899510 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.899555 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.899568 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.899584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.899597 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:46Z","lastTransitionTime":"2026-02-16T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.991574 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:56:22.59782368 +0000 UTC Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.993939 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.994091 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:46 crc kubenswrapper[4748]: E0216 14:54:46.994334 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:46 crc kubenswrapper[4748]: I0216 14:54:46.994423 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:46 crc kubenswrapper[4748]: E0216 14:54:46.994893 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:46 crc kubenswrapper[4748]: E0216 14:54:46.995079 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.002218 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.002277 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.002299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.002330 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.002349 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.105152 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.105203 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.105219 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.105241 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.105257 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.208299 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.208367 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.208385 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.208408 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.208425 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.311965 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.312105 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.312132 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.312159 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.312178 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.415226 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.415266 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.415298 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.415320 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.415331 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.518635 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.518687 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.518700 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.518738 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.518750 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.622236 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.622324 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.622341 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.622364 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.622381 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.724854 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.725002 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.725030 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.725061 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.725083 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.827788 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.827873 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.827891 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.827915 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.827934 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.931629 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.931746 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.931765 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.931790 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.931807 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:47Z","lastTransitionTime":"2026-02-16T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.992776 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 11:37:12.165621767 +0000 UTC Feb 16 14:54:47 crc kubenswrapper[4748]: I0216 14:54:47.994181 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:47 crc kubenswrapper[4748]: E0216 14:54:47.994502 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.034958 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.035039 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.035057 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.035085 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.035104 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.138291 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.138352 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.138370 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.138399 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.138416 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.241527 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.241612 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.241622 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.241663 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.241679 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.344005 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.344048 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.344057 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.344072 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.344083 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.446806 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.446995 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.447071 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.447100 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.447161 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.551867 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.551934 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.551951 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.551974 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.551990 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.655219 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.655319 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.655333 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.655353 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.655369 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.758869 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.758919 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.758934 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.758953 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.758967 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.862133 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.862178 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.862188 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.862205 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.862218 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.965519 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.965595 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.965614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.965638 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.965656 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:48Z","lastTransitionTime":"2026-02-16T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.993922 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:11:09.265310005 +0000 UTC Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.994062 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.994114 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:48 crc kubenswrapper[4748]: I0216 14:54:48.994123 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:48 crc kubenswrapper[4748]: E0216 14:54:48.994258 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:48 crc kubenswrapper[4748]: E0216 14:54:48.994387 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:48 crc kubenswrapper[4748]: E0216 14:54:48.994510 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.069149 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.069237 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.069263 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.069300 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.069326 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.173252 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.173330 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.173355 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.173385 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.173410 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.277134 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.277195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.277206 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.277225 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.277238 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.380822 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.380905 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.380930 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.380961 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.380983 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.484220 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.484291 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.484310 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.484336 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.484356 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.587467 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.587529 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.587543 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.587564 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.587580 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.690187 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.690290 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.690317 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.690351 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.690376 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.718626 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/1.log"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.719299 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/0.log"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.719386 4748 generic.go:334] "Generic (PLEG): container finished" podID="1724aef8-25e0-40aa-86be-2ca7849960f1" containerID="0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659" exitCode=1
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.719438 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerDied","Data":"0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659"}
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.719485 4748 scope.go:117] "RemoveContainer" containerID="e53164832a943576c244400b006abed61db6e09cc05f1a99361e159a7164effa"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.720339 4748 scope.go:117] "RemoveContainer" containerID="0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659"
Feb 16 14:54:49 crc kubenswrapper[4748]: E0216 14:54:49.720683 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-dw679_openshift-multus(1724aef8-25e0-40aa-86be-2ca7849960f1)\"" pod="openshift-multus/multus-dw679" podUID="1724aef8-25e0-40aa-86be-2ca7849960f1"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.783354 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=94.783324639 podStartE2EDuration="1m34.783324639s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:49.763528829 +0000 UTC m=+115.455197938" watchObservedRunningTime="2026-02-16 14:54:49.783324639 +0000 UTC m=+115.474993708"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.794460 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.794514 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.794526 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.794545 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.794564 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.802287 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=48.802250486 podStartE2EDuration="48.802250486s" podCreationTimestamp="2026-02-16 14:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:49.799846846 +0000 UTC m=+115.491515895" watchObservedRunningTime="2026-02-16 14:54:49.802250486 +0000 UTC m=+115.493919555"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.833595 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=34.833573517 podStartE2EDuration="34.833573517s" podCreationTimestamp="2026-02-16 14:54:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:49.829596077 +0000 UTC m=+115.521265116" watchObservedRunningTime="2026-02-16 14:54:49.833573517 +0000 UTC m=+115.525242556"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.898145 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.898193 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.898211 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.898235 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.898254 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:49Z","lastTransitionTime":"2026-02-16T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.993646 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:54:49 crc kubenswrapper[4748]: E0216 14:54:49.993890 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:54:49 crc kubenswrapper[4748]: I0216 14:54:49.994200 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:07:03.339500682 +0000 UTC
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.001407 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.001479 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.001528 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.001560 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.001622 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.105304 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.105353 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.105365 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.105384 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.105397 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.209147 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.209215 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.209232 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.209255 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.209272 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.312931 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.313004 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.313016 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.313033 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.313046 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.416805 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.416880 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.416901 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.416950 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.416975 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.520832 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.520912 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.520930 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.520955 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.520973 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.624488 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.624617 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.624635 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.624657 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.624672 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.726211 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/1.log"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.727617 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.727676 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.727697 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.727752 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.727774 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.834594 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.834665 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.834685 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.834744 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.834766 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.938750 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.938814 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.938831 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.938860 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.938880 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:50Z","lastTransitionTime":"2026-02-16T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.993991 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.993991 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.994305 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:50 crc kubenswrapper[4748]: E0216 14:54:50.994381 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.994343 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:06:05.833763778 +0000 UTC
Feb 16 14:54:50 crc kubenswrapper[4748]: E0216 14:54:50.994654 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:50 crc kubenswrapper[4748]: E0216 14:54:50.995399 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:50 crc kubenswrapper[4748]: I0216 14:54:50.995771 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"
Feb 16 14:54:50 crc kubenswrapper[4748]: E0216 14:54:50.996003 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-r662f_openshift-ovn-kubernetes(2f88ea54-3399-4d84-bc96-5b7d9575bbf5)\"" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.041352 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.041425 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.041434 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.041471 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.041486 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.145101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.145192 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.145214 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.145244 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.145266 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.249115 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.249195 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.249212 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.249238 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.249255 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.351967 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.352028 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.352040 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.352118 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.352137 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.454913 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.454949 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.454982 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.455001 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.455013 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.558011 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.558088 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.558100 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.558123 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.558139 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.661834 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.661892 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.661905 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.661927 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.661941 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.765275 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.765358 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.765382 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.765419 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.765447 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.868490 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.868566 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.868584 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.868614 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.868636 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.972851 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.972918 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.972940 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.972973 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.972997 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:51Z","lastTransitionTime":"2026-02-16T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.993449 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:54:51 crc kubenswrapper[4748]: E0216 14:54:51.993699 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:54:51 crc kubenswrapper[4748]: I0216 14:54:51.995477 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 06:30:29.046525023 +0000 UTC
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.076335 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.076374 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.076387 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.076405 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.076419 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.179937 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.179997 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.180015 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.180043 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.180063 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.283857 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.283909 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.283928 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.283954 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.283974 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.386679 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.386738 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.386752 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.386773 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.386786 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.489444 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.489531 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.489556 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.489590 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.489612 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.592948 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.593020 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.593038 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.593064 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.593091 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.696960 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.697052 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.697078 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.697106 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.697127 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.800441 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.800477 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.800493 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.800516 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.800528 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.908446 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.908533 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.908551 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.908611 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.908632 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:52Z","lastTransitionTime":"2026-02-16T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.994470 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.994614 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:54:52 crc kubenswrapper[4748]: E0216 14:54:52.994787 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 16 14:54:52 crc kubenswrapper[4748]: E0216 14:54:52.994896 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.995448 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:54:52 crc kubenswrapper[4748]: E0216 14:54:52.995766 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 16 14:54:52 crc kubenswrapper[4748]: I0216 14:54:52.995933 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:18:46.731203924 +0000 UTC Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.012327 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.012381 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.012398 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.012421 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.012442 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.115864 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.115917 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.115935 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.115960 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.115978 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.219747 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.219799 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.219815 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.219840 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.219857 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.323820 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.323899 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.323920 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.323962 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.324004 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.427793 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.427942 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.428043 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.428079 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.428107 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.530978 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.531054 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.531073 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.531104 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.531129 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.635273 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.635367 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.635389 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.635428 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.635449 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.738441 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.738487 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.738501 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.738523 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.738537 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.825027 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.825101 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.825113 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.825138 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.825154 4748 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T14:54:53Z","lastTransitionTime":"2026-02-16T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.887035 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"] Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.887731 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.890828 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.891145 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.891688 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.893323 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.994276 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:54:53 crc kubenswrapper[4748]: E0216 14:54:53.994590 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4" Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.997290 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 20:34:46.761793989 +0000 UTC Feb 16 14:54:53 crc kubenswrapper[4748]: I0216 14:54:53.997376 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.006932 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.007017 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.007093 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.007164 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.007227 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.008857 4748 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.108363 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.108486 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.108562 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.108677 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.108765 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.108790 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.108869 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.110041 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-service-ca\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.120122 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.136190 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c79662c4-fadf-4b9e-aa96-cc649d9c97d1-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-hlv7t\" (UID: \"c79662c4-fadf-4b9e-aa96-cc649d9c97d1\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.214590 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.745351 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" event={"ID":"c79662c4-fadf-4b9e-aa96-cc649d9c97d1","Type":"ContainerStarted","Data":"beab9ce0d0018d2cf3349f0d5b5605628e3f1ce476314d40c075f993b122c87c"}
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.745428 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" event={"ID":"c79662c4-fadf-4b9e-aa96-cc649d9c97d1","Type":"ContainerStarted","Data":"50ec5c806325f46bfab8b45f30f379544b3ddb094543f818f548786710e5ba69"}
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.770466 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-hlv7t" podStartSLOduration=99.77044536 podStartE2EDuration="1m39.77044536s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:54:54.769009004 +0000 UTC m=+120.460678053" watchObservedRunningTime="2026-02-16 14:54:54.77044536 +0000 UTC m=+120.462114409"
Feb 16 14:54:54 crc kubenswrapper[4748]: E0216 14:54:54.985587 4748 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.994081 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.994144 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:54 crc kubenswrapper[4748]: I0216 14:54:54.994939 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:54 crc kubenswrapper[4748]: E0216 14:54:54.994929 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:54 crc kubenswrapper[4748]: E0216 14:54:54.995085 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:54 crc kubenswrapper[4748]: E0216 14:54:54.995252 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:55 crc kubenswrapper[4748]: E0216 14:54:55.113049 4748 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 14:54:55 crc kubenswrapper[4748]: I0216 14:54:55.994393 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:54:55 crc kubenswrapper[4748]: E0216 14:54:55.994595 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:54:56 crc kubenswrapper[4748]: I0216 14:54:56.993343 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:56 crc kubenswrapper[4748]: I0216 14:54:56.993475 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:56 crc kubenswrapper[4748]: I0216 14:54:56.993375 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:56 crc kubenswrapper[4748]: E0216 14:54:56.993579 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:56 crc kubenswrapper[4748]: E0216 14:54:56.993679 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:56 crc kubenswrapper[4748]: E0216 14:54:56.993881 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:57 crc kubenswrapper[4748]: I0216 14:54:57.993687 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:54:57 crc kubenswrapper[4748]: E0216 14:54:57.993955 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:54:58 crc kubenswrapper[4748]: I0216 14:54:58.993329 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:54:58 crc kubenswrapper[4748]: I0216 14:54:58.993329 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:54:58 crc kubenswrapper[4748]: E0216 14:54:58.993580 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:54:58 crc kubenswrapper[4748]: I0216 14:54:58.993672 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:54:58 crc kubenswrapper[4748]: E0216 14:54:58.993831 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:54:58 crc kubenswrapper[4748]: E0216 14:54:58.993935 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:54:59 crc kubenswrapper[4748]: I0216 14:54:59.993345 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:54:59 crc kubenswrapper[4748]: E0216 14:54:59.993598 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:55:00 crc kubenswrapper[4748]: E0216 14:55:00.115005 4748 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 14:55:00 crc kubenswrapper[4748]: I0216 14:55:00.993561 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:00 crc kubenswrapper[4748]: I0216 14:55:00.993645 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:00 crc kubenswrapper[4748]: E0216 14:55:00.993866 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:00 crc kubenswrapper[4748]: I0216 14:55:00.993909 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:00 crc kubenswrapper[4748]: E0216 14:55:00.994091 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:00 crc kubenswrapper[4748]: E0216 14:55:00.994439 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:01 crc kubenswrapper[4748]: I0216 14:55:01.993629 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:55:01 crc kubenswrapper[4748]: E0216 14:55:01.994394 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:55:01 crc kubenswrapper[4748]: I0216 14:55:01.994941 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.779171 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/3.log"
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.782050 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerStarted","Data":"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d"}
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.782541 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-r662f"
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.829935 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podStartSLOduration=107.829909272 podStartE2EDuration="1m47.829909272s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:02.828250501 +0000 UTC m=+128.519919550" watchObservedRunningTime="2026-02-16 14:55:02.829909272 +0000 UTC m=+128.521578311"
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.963837 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lll47"]
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.964067 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:55:02 crc kubenswrapper[4748]: E0216 14:55:02.964417 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.993851 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.993933 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:02 crc kubenswrapper[4748]: I0216 14:55:02.993851 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:02 crc kubenswrapper[4748]: E0216 14:55:02.994018 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:02 crc kubenswrapper[4748]: E0216 14:55:02.994161 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:02 crc kubenswrapper[4748]: E0216 14:55:02.994308 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:03 crc kubenswrapper[4748]: I0216 14:55:03.995185 4748 scope.go:117] "RemoveContainer" containerID="0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659"
Feb 16 14:55:04 crc kubenswrapper[4748]: I0216 14:55:04.794819 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/1.log"
Feb 16 14:55:04 crc kubenswrapper[4748]: I0216 14:55:04.794920 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerStarted","Data":"d9bd7e0c3f7c247fb4b65a2732e952b3dbcbebd8f74cfd56d321817b97bdd8bb"}
Feb 16 14:55:04 crc kubenswrapper[4748]: I0216 14:55:04.993875 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:04 crc kubenswrapper[4748]: I0216 14:55:04.993949 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:04 crc kubenswrapper[4748]: I0216 14:55:04.993982 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:04 crc kubenswrapper[4748]: I0216 14:55:04.994076 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:55:04 crc kubenswrapper[4748]: E0216 14:55:04.996249 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:04 crc kubenswrapper[4748]: E0216 14:55:04.996427 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:04 crc kubenswrapper[4748]: E0216 14:55:04.996623 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:04 crc kubenswrapper[4748]: E0216 14:55:04.996831 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:55:05 crc kubenswrapper[4748]: E0216 14:55:05.115754 4748 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 16 14:55:06 crc kubenswrapper[4748]: I0216 14:55:06.993690 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:06 crc kubenswrapper[4748]: I0216 14:55:06.993765 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:55:06 crc kubenswrapper[4748]: I0216 14:55:06.993799 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:06 crc kubenswrapper[4748]: I0216 14:55:06.993843 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:06 crc kubenswrapper[4748]: E0216 14:55:06.994154 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:06 crc kubenswrapper[4748]: E0216 14:55:06.994217 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:06 crc kubenswrapper[4748]: E0216 14:55:06.994293 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:55:06 crc kubenswrapper[4748]: E0216 14:55:06.994358 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:08 crc kubenswrapper[4748]: I0216 14:55:08.994204 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:55:08 crc kubenswrapper[4748]: I0216 14:55:08.994240 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:08 crc kubenswrapper[4748]: I0216 14:55:08.994204 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:08 crc kubenswrapper[4748]: E0216 14:55:08.994418 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lll47" podUID="078f98ca-d871-47a5-96c3-1e818312c4c4"
Feb 16 14:55:08 crc kubenswrapper[4748]: I0216 14:55:08.994500 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:08 crc kubenswrapper[4748]: E0216 14:55:08.994683 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 16 14:55:08 crc kubenswrapper[4748]: E0216 14:55:08.994746 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 16 14:55:08 crc kubenswrapper[4748]: E0216 14:55:08.994837 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.993873 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.994853 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.995183 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.995475 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.997980 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.998656 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.998678 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.998799 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 16 14:55:10 crc kubenswrapper[4748]: I0216 14:55:10.999203 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Feb 16 14:55:11 crc kubenswrapper[4748]: I0216 14:55:11.000065 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 16 14:55:13 crc kubenswrapper[4748]: I0216 14:55:13.106435 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-r662f"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.380980 4748 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.434368 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.435163 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.435965 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fvh45"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.436872 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fvh45"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.439178 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.442051 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.442431 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.443112 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.443750 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z47nh"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.444568 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnhj8"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.445173 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bw7n"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.445849 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.446679 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.447327 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.451994 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.452770 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.453488 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jqhqn"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.454246 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.455060 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.455585 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.463432 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.465173 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-clm9d"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.465826 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-clm9d"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.466094 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462"]
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.466855 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.467520 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.467668 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.467807 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.467921 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468043 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 16
14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468253 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468317 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468383 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468497 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468610 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468677 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.468967 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.469183 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.469286 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.469515 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.469538 4748 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.469636 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.469206 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.469942 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.477346 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.477365 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.478226 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.483605 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.492031 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.492211 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.515462 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.515653 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.515938 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516060 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516195 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516286 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516315 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516472 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516590 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.516615 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516670 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z47nh"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.516822 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.517011 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.517430 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.517579 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.517853 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.518018 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.518534 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.519621 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.519826 4748 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.519877 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.520407 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.520632 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.521000 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.521191 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.521865 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.522131 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.522178 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.522348 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.522556 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.523304 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527285 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527316 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527346 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527480 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527506 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527514 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527631 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527665 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527774 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bw7n"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527805 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.527848 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.528829 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.529494 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fvh45"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.529713 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.529777 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.529865 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.530004 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.530013 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.530116 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.530181 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.530233 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.530636 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.534151 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.536526 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-clm9d"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.537760 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.547864 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9x6qs"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.548933 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.548989 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.549314 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.549928 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.550709 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.551019 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.551576 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnhj8"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.565485 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.571395 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.571843 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.573827 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-trusted-ca\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.573881 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad452651-e143-4074-8f39-d3074bc487ca-serving-cert\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.573931 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f4e162c-8f1b-4951-b339-d9ac41932f4c-machine-approver-tls\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.573959 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mck\" (UniqueName: \"kubernetes.io/projected/62e84868-6940-4784-8892-ac255eacc315-kube-api-access-x2mck\") pod \"cluster-samples-operator-665b6dd947-bf462\" (UID: \"62e84868-6940-4784-8892-ac255eacc315\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.573988 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-config\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " 
pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574012 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-config\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574032 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-etcd-client\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574056 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d066b2b8-53ef-4936-ade8-6290469be4ff-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574084 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/263bd0a0-5043-48f7-a185-30ef874fc6e7-images\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574106 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574135 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574167 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-encryption-config\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574199 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ps2b\" (UniqueName: \"kubernetes.io/projected/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-kube-api-access-4ps2b\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574225 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7n92\" (UniqueName: \"kubernetes.io/projected/640e1720-ddc6-4148-8f90-a80375ca4187-kube-api-access-j7n92\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574251 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn9np\" (UniqueName: \"kubernetes.io/projected/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-kube-api-access-wn9np\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574278 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e168487f-8cf1-45d6-9f48-7e15e92f7c22-audit-dir\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574304 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-audit-dir\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574330 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-image-import-ca\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574362 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574387 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2szrk\" (UniqueName: \"kubernetes.io/projected/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-kube-api-access-2szrk\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574433 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574462 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-etcd-client\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574487 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-dir\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: 
\"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574518 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4e162c-8f1b-4951-b339-d9ac41932f4c-config\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574532 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-audit\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574561 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574584 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-serving-cert\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574600 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-config\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574649 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c228428-e7c4-4dc0-99d5-f90f0231ae29-serving-cert\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574669 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574692 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574726 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmv2q\" (UniqueName: \"kubernetes.io/projected/d066b2b8-53ef-4936-ade8-6290469be4ff-kube-api-access-zmv2q\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: 
\"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574748 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574769 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-serving-cert\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574789 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pm5b\" (UniqueName: \"kubernetes.io/projected/e168487f-8cf1-45d6-9f48-7e15e92f7c22-kube-api-access-7pm5b\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574805 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066b2b8-53ef-4936-ade8-6290469be4ff-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574825 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-policies\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574845 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574864 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574885 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574904 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574929 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574952 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5kl6\" (UniqueName: \"kubernetes.io/projected/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-kube-api-access-l5kl6\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574975 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.574997 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q29g2\" (UniqueName: \"kubernetes.io/projected/7c228428-e7c4-4dc0-99d5-f90f0231ae29-kube-api-access-q29g2\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575017 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575040 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/640e1720-ddc6-4148-8f90-a80375ca4187-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575058 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-node-pullsecrets\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575141 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-service-ca-bundle\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575173 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-config\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575195 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-encryption-config\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575220 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575252 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx986\" (UniqueName: \"kubernetes.io/projected/ad452651-e143-4074-8f39-d3074bc487ca-kube-api-access-qx986\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575278 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/263bd0a0-5043-48f7-a185-30ef874fc6e7-config\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: 
\"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575304 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vj4q\" (UniqueName: \"kubernetes.io/projected/2f4e162c-8f1b-4951-b339-d9ac41932f4c-kube-api-access-9vj4q\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575336 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-serving-cert\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575357 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62e84868-6940-4784-8892-ac255eacc315-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bf462\" (UID: \"62e84868-6940-4784-8892-ac255eacc315\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575383 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575404 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-audit-policies\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575423 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/640e1720-ddc6-4148-8f90-a80375ca4187-serving-cert\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575442 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f4e162c-8f1b-4951-b339-d9ac41932f4c-auth-proxy-config\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575461 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmnvj\" (UniqueName: \"kubernetes.io/projected/263bd0a0-5043-48f7-a185-30ef874fc6e7-kube-api-access-qmnvj\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575492 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/263bd0a0-5043-48f7-a185-30ef874fc6e7-machine-api-operator-tls\") pod 
\"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575509 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575525 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-client-ca\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.575551 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.576121 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.576564 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.577476 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.578488 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.578672 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-fs2p7"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.579515 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.604025 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.605154 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.605608 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.605941 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x89l2"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.606293 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.606671 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.607043 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-fs2p7" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.607240 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.607397 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.607595 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.607960 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.614142 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.614346 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.614475 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.614602 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615183 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615322 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615435 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615575 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615642 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615805 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615851 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.615781 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.618901 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.621391 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4xpkc"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.621745 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.622150 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.622459 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-26scw"] Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.622604 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.622753 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.623258 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.623359 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jqhqn"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.623440 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.623649 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.623664 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.623895 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.626353 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9x6qs"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.628976 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.629650 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.630124 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fs2p7"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.631354 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.632451 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xpkc"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.634002 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.636173 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.636208 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-hqs5w"] Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.637387 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.637640 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.638075 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.638547 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.638901 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.639425 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.640577 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qm9b7"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.641021 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.648898 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.650567 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.651362 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.651829 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.653156 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.653775 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.655169 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.655888 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.656436 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4m6k7"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.656903 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.657141 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.657992 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-k2pmm"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.658523 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.659321 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.660105 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.660622 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.660944 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.662149 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.662632 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.663392 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.663985 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.665561 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-glrmv"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676511 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmnvj\" (UniqueName: \"kubernetes.io/projected/263bd0a0-5043-48f7-a185-30ef874fc6e7-kube-api-access-qmnvj\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676559 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/263bd0a0-5043-48f7-a185-30ef874fc6e7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676595 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676622 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-client-ca\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676642 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676666 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-trusted-ca\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676698 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad452651-e143-4074-8f39-d3074bc487ca-serving-cert\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676761 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2mck\" (UniqueName: \"kubernetes.io/projected/62e84868-6940-4784-8892-ac255eacc315-kube-api-access-x2mck\") pod 
\"cluster-samples-operator-665b6dd947-bf462\" (UID: \"62e84868-6940-4784-8892-ac255eacc315\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676789 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f4e162c-8f1b-4951-b339-d9ac41932f4c-machine-approver-tls\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676814 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-config\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676836 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-etcd-client\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676854 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d066b2b8-53ef-4936-ade8-6290469be4ff-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676877 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/263bd0a0-5043-48f7-a185-30ef874fc6e7-images\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676900 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-config\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676923 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676943 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676968 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-encryption-config\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.676993 
4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3886887-0bd5-4b76-ad85-cf0f065a997b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677020 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7n92\" (UniqueName: \"kubernetes.io/projected/640e1720-ddc6-4148-8f90-a80375ca4187-kube-api-access-j7n92\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677040 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn9np\" (UniqueName: \"kubernetes.io/projected/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-kube-api-access-wn9np\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677057 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ps2b\" (UniqueName: \"kubernetes.io/projected/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-kube-api-access-4ps2b\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677078 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e168487f-8cf1-45d6-9f48-7e15e92f7c22-audit-dir\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: 
\"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677095 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-audit-dir\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677114 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-image-import-ca\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677130 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677153 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2szrk\" (UniqueName: \"kubernetes.io/projected/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-kube-api-access-2szrk\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677185 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677202 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-etcd-client\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677221 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-dir\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677248 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-audit\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677267 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4e162c-8f1b-4951-b339-d9ac41932f4c-config\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677297 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677316 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-serving-cert\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-config\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677359 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c228428-e7c4-4dc0-99d5-f90f0231ae29-serving-cert\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677381 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.677399 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677422 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677443 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3886887-0bd5-4b76-ad85-cf0f065a997b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677466 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-serving-cert\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677486 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmv2q\" (UniqueName: 
\"kubernetes.io/projected/d066b2b8-53ef-4936-ade8-6290469be4ff-kube-api-access-zmv2q\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677518 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pm5b\" (UniqueName: \"kubernetes.io/projected/e168487f-8cf1-45d6-9f48-7e15e92f7c22-kube-api-access-7pm5b\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677541 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066b2b8-53ef-4936-ade8-6290469be4ff-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677561 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-policies\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677584 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.677602 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677623 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677644 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677666 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5kl6\" (UniqueName: \"kubernetes.io/projected/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-kube-api-access-l5kl6\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677682 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677703 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677740 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q29g2\" (UniqueName: \"kubernetes.io/projected/7c228428-e7c4-4dc0-99d5-f90f0231ae29-kube-api-access-q29g2\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677762 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677782 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/640e1720-ddc6-4148-8f90-a80375ca4187-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.677803 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-node-pullsecrets\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677831 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-service-ca-bundle\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677858 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-config\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677887 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3886887-0bd5-4b76-ad85-cf0f065a997b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677906 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-encryption-config\") pod \"apiserver-76f77b778f-fvh45\" (UID: 
\"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677929 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx986\" (UniqueName: \"kubernetes.io/projected/ad452651-e143-4074-8f39-d3074bc487ca-kube-api-access-qx986\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677950 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677972 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/263bd0a0-5043-48f7-a185-30ef874fc6e7-config\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.677994 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vj4q\" (UniqueName: \"kubernetes.io/projected/2f4e162c-8f1b-4951-b339-d9ac41932f4c-kube-api-access-9vj4q\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.678030 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-serving-cert\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.678052 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.678072 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-audit-policies\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.678094 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/640e1720-ddc6-4148-8f90-a80375ca4187-serving-cert\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.678115 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f4e162c-8f1b-4951-b339-d9ac41932f4c-auth-proxy-config\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.678142 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62e84868-6940-4784-8892-ac255eacc315-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-bf462\" (UID: \"62e84868-6940-4784-8892-ac255eacc315\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.678455 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.679142 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.680989 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.681367 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.682591 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.683268 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.684049 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/263bd0a0-5043-48f7-a185-30ef874fc6e7-images\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.684398 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.685300 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.687331 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.688141 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.688640 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-encryption-config\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.689662 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d066b2b8-53ef-4936-ade8-6290469be4ff-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.689788 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/263bd0a0-5043-48f7-a185-30ef874fc6e7-config\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 
14:55:14.690614 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.691183 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-config\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.693289 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c228428-e7c4-4dc0-99d5-f90f0231ae29-serving-cert\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.693798 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f4e162c-8f1b-4951-b339-d9ac41932f4c-config\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.694241 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-etcd-serving-ca\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " 
pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.695350 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-config\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.696171 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.696824 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-dir\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.696913 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.697224 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-node-pullsecrets\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.697751 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-trusted-ca-bundle\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.698174 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-client-ca\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.698223 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/640e1720-ddc6-4148-8f90-a80375ca4187-available-featuregates\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.698417 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-config\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.699124 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c228428-e7c4-4dc0-99d5-f90f0231ae29-service-ca-bundle\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" 
Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.699862 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-audit\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.700234 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.700908 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-trusted-ca\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.701403 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.701983 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.702196 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.702263 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.702324 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-audit-dir\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.702368 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e168487f-8cf1-45d6-9f48-7e15e92f7c22-audit-dir\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.702381 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.702617 4748 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.702862 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x89l2"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.703266 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-encryption-config\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.703295 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-etcd-client\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.704000 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2f4e162c-8f1b-4951-b339-d9ac41932f4c-machine-approver-tls\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.704453 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-image-import-ca\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.704688 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-config\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.705034 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.705197 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2f4e162c-8f1b-4951-b339-d9ac41932f4c-auth-proxy-config\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.705422 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.705568 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/e168487f-8cf1-45d6-9f48-7e15e92f7c22-audit-policies\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.705774 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad452651-e143-4074-8f39-d3074bc487ca-serving-cert\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: 
\"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.705924 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-etcd-client\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.706179 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-policies\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.706920 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.708114 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-serving-cert\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.708167 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/62e84868-6940-4784-8892-ac255eacc315-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-bf462\" (UID: \"62e84868-6940-4784-8892-ac255eacc315\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.708273 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d066b2b8-53ef-4936-ade8-6290469be4ff-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.708766 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.712152 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-26scw"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.713446 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e168487f-8cf1-45d6-9f48-7e15e92f7c22-serving-cert\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.715638 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/263bd0a0-5043-48f7-a185-30ef874fc6e7-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.717940 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 
14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.718122 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.719470 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-serving-cert\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.719602 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.720673 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.721688 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/640e1720-ddc6-4148-8f90-a80375ca4187-serving-cert\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.721788 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.723057 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.724798 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k2pmm"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.728246 4748 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.729536 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4m6k7"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.730643 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.732745 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-x4th6"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.733563 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x4th6" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.733802 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6hfsw"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.734696 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6hfsw" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.735046 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qm9b7"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.736150 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.737513 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.737725 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.738411 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.739473 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.740486 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-glrmv"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.741538 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x4th6"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.744114 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rn626"] Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.745003 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rn626"] Feb 16 14:55:14 crc 
kubenswrapper[4748]: I0216 14:55:14.745146 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.757588 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.777884 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.778749 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3886887-0bd5-4b76-ad85-cf0f065a997b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.778853 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3886887-0bd5-4b76-ad85-cf0f065a997b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.778896 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3886887-0bd5-4b76-ad85-cf0f065a997b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.797378 4748 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.817888 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.837340 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.857454 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.862830 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3886887-0bd5-4b76-ad85-cf0f065a997b-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.879514 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.897397 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.900071 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3886887-0bd5-4b76-ad85-cf0f065a997b-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.917309 
4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.938627 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.958133 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.977389 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 14:55:14 crc kubenswrapper[4748]: I0216 14:55:14.997448 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.018734 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.038247 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.082145 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.084617 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.097948 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.117762 4748 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.138424 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.158245 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.177306 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.198404 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.217547 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.237757 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.257644 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.285578 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.316963 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.339297 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.378385 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.399268 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.417905 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.438309 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.457382 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.478735 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.497306 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.518636 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.538259 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.559004 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.578594 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.598213 4748 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.619080 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.638377 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.655787 4748 request.go:700] Waited for 1.016067698s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-service-ca-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.657678 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.677918 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.698866 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.718420 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.750342 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.759226 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.778112 4748 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.797559 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.818361 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.837679 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.857640 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.879106 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.898125 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.918636 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.938127 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.959594 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.978336 4748 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 14:55:15 crc kubenswrapper[4748]: I0216 14:55:15.998142 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.018776 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.037455 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.057765 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.078077 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.098701 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.118130 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.138548 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.159389 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.177695 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.198833 4748 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.219066 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.238169 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.258313 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.279513 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.298214 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.317693 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.357744 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.367764 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vj4q\" (UniqueName: \"kubernetes.io/projected/2f4e162c-8f1b-4951-b339-d9ac41932f4c-kube-api-access-9vj4q\") pod \"machine-approver-56656f9798-6jhx4\" (UID: \"2f4e162c-8f1b-4951-b339-d9ac41932f4c\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:16 crc kubenswrapper[4748]: 
I0216 14:55:16.377883 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.429791 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5kl6\" (UniqueName: \"kubernetes.io/projected/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-kube-api-access-l5kl6\") pod \"oauth-openshift-558db77b4-2bw7n\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.450007 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmnvj\" (UniqueName: \"kubernetes.io/projected/263bd0a0-5043-48f7-a185-30ef874fc6e7-kube-api-access-qmnvj\") pod \"machine-api-operator-5694c8668f-z47nh\" (UID: \"263bd0a0-5043-48f7-a185-30ef874fc6e7\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.467579 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q29g2\" (UniqueName: \"kubernetes.io/projected/7c228428-e7c4-4dc0-99d5-f90f0231ae29-kube-api-access-q29g2\") pod \"authentication-operator-69f744f599-jqhqn\" (UID: \"7c228428-e7c4-4dc0-99d5-f90f0231ae29\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.492145 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmv2q\" (UniqueName: \"kubernetes.io/projected/d066b2b8-53ef-4936-ade8-6290469be4ff-kube-api-access-zmv2q\") pod \"openshift-apiserver-operator-796bbdcf4f-s48fz\" (UID: \"d066b2b8-53ef-4936-ade8-6290469be4ff\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.497000 4748 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7pm5b\" (UniqueName: \"kubernetes.io/projected/e168487f-8cf1-45d6-9f48-7e15e92f7c22-kube-api-access-7pm5b\") pod \"apiserver-7bbb656c7d-6wx2n\" (UID: \"e168487f-8cf1-45d6-9f48-7e15e92f7c22\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.513755 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2szrk\" (UniqueName: \"kubernetes.io/projected/75a73d6b-7fa7-483c-82f8-b2fa2d1a4906-kube-api-access-2szrk\") pod \"openshift-controller-manager-operator-756b6f6bc6-42ljz\" (UID: \"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.532585 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ps2b\" (UniqueName: \"kubernetes.io/projected/b1c29b40-2c74-45be-bdf6-2d77ccc9c6da-kube-api-access-4ps2b\") pod \"apiserver-76f77b778f-fvh45\" (UID: \"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da\") " pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.543089 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.552329 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7n92\" (UniqueName: \"kubernetes.io/projected/640e1720-ddc6-4148-8f90-a80375ca4187-kube-api-access-j7n92\") pod \"openshift-config-operator-7777fb866f-pfvsx\" (UID: \"640e1720-ddc6-4148-8f90-a80375ca4187\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.568470 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.575995 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn9np\" (UniqueName: \"kubernetes.io/projected/f889a3dd-c5ed-49f7-98b5-0cafd33399a4-kube-api-access-wn9np\") pod \"console-operator-58897d9998-clm9d\" (UID: \"f889a3dd-c5ed-49f7-98b5-0cafd33399a4\") " pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.579820 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.594528 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2mck\" (UniqueName: \"kubernetes.io/projected/62e84868-6940-4784-8892-ac255eacc315-kube-api-access-x2mck\") pod \"cluster-samples-operator-665b6dd947-bf462\" (UID: \"62e84868-6940-4784-8892-ac255eacc315\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" Feb 16 14:55:16 crc kubenswrapper[4748]: W0216 14:55:16.595351 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f4e162c_8f1b_4951_b339_d9ac41932f4c.slice/crio-e24ad4232bf644b24e1288ef5e30d3a5433650252dec346148681978d52cac0d WatchSource:0}: Error finding container e24ad4232bf644b24e1288ef5e30d3a5433650252dec346148681978d52cac0d: Status 404 returned error can't find the container with id e24ad4232bf644b24e1288ef5e30d3a5433650252dec346148681978d52cac0d Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.601505 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.613705 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx986\" (UniqueName: \"kubernetes.io/projected/ad452651-e143-4074-8f39-d3074bc487ca-kube-api-access-qx986\") pod \"controller-manager-879f6c89f-hnhj8\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.620032 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.621460 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.638985 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.649679 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-fvh45" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.659902 4748 request.go:700] Waited for 1.926079348s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0 Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.662552 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.673829 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.679261 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.698228 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.708181 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.718222 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.719355 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.737353 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.756774 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.758708 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.778772 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.805281 4748 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.834942 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n"] Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.846597 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" event={"ID":"2f4e162c-8f1b-4951-b339-d9ac41932f4c","Type":"ContainerStarted","Data":"e24ad4232bf644b24e1288ef5e30d3a5433650252dec346148681978d52cac0d"} Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.846887 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3886887-0bd5-4b76-ad85-cf0f065a997b-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-659gc\" (UID: \"b3886887-0bd5-4b76-ad85-cf0f065a997b\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.853974 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.860706 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" Feb 16 14:55:16 crc kubenswrapper[4748]: W0216 14:55:16.879761 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode168487f_8cf1_45d6_9f48_7e15e92f7c22.slice/crio-9af0a8063e420c7181226dc7d14da5cb95ae6c810992efa07e12399aa67c38ab WatchSource:0}: Error finding container 9af0a8063e420c7181226dc7d14da5cb95ae6c810992efa07e12399aa67c38ab: Status 404 returned error can't find the container with id 9af0a8063e420c7181226dc7d14da5cb95ae6c810992efa07e12399aa67c38ab Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.910512 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz"] Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911291 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ef14221-89ef-4843-af97-142575e3284f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911313 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjv2w\" (UniqueName: \"kubernetes.io/projected/1b6ee71e-062d-49b9-b693-665355764e4f-kube-api-access-mjv2w\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911343 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dff8f857-9981-4b16-93df-4200012ca322-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911363 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/a695a4a3-bc60-4df5-8058-8986ba958074-kube-api-access-t4gl5\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911381 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw7r4\" (UniqueName: \"kubernetes.io/projected/d6ee3265-356a-4eb2-afd0-c72976719909-kube-api-access-pw7r4\") pod \"dns-operator-744455d44c-9x6qs\" (UID: \"d6ee3265-356a-4eb2-afd0-c72976719909\") " pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911437 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj5wl\" (UniqueName: \"kubernetes.io/projected/0491318b-bf9a-46a2-b262-8aaf5f3061f9-kube-api-access-lj5wl\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911462 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-oauth-serving-cert\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 
16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911487 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-trusted-ca\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911507 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-serving-cert\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911521 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-console-config\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911547 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8f857-9981-4b16-93df-4200012ca322-config\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911603 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1005279f-a20b-43cd-957a-731252114f31-serving-cert\") pod \"route-controller-manager-6576b87f9c-dl2zh\" 
(UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911621 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0491318b-bf9a-46a2-b262-8aaf5f3061f9-metrics-tls\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911647 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14221-89ef-4843-af97-142575e3284f-config\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911731 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-config\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911747 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-trusted-ca-bundle\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911765 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6ee3265-356a-4eb2-afd0-c72976719909-metrics-tls\") pod \"dns-operator-744455d44c-9x6qs\" (UID: \"d6ee3265-356a-4eb2-afd0-c72976719909\") " pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911831 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-registry-certificates\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911861 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/487c3d9a-c413-417d-ada8-cba0e38f0633-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911892 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-ca\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.911907 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-service-ca\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:16 crc 
kubenswrapper[4748]: I0216 14:55:16.916368 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a480239a-6f26-4189-9a3c-17896449a6e3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916406 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0491318b-bf9a-46a2-b262-8aaf5f3061f9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916454 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/487c3d9a-c413-417d-ada8-cba0e38f0633-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916510 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-client\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916529 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/dff8f857-9981-4b16-93df-4200012ca322-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916546 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-bound-sa-token\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916560 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcxjr\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-kube-api-access-dcxjr\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916619 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-client-ca\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916686 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ef14221-89ef-4843-af97-142575e3284f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.916739 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-service-ca\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.917357 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdqp9\" (UniqueName: \"kubernetes.io/projected/487c3d9a-c413-417d-ada8-cba0e38f0633-kube-api-access-qdqp9\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.917784 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-registry-tls\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.917830 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-config\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.918306 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nnlh\" (UniqueName: \"kubernetes.io/projected/8611553d-e8b4-4f6e-a2c0-abb19409cd02-kube-api-access-8nnlh\") pod \"downloads-7954f5f757-fs2p7\" (UID: \"8611553d-e8b4-4f6e-a2c0-abb19409cd02\") " pod="openshift-console/downloads-7954f5f757-fs2p7" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.918346 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-oauth-config\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.918367 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/487c3d9a-c413-417d-ada8-cba0e38f0633-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.918418 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.918442 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a695a4a3-bc60-4df5-8058-8986ba958074-serving-cert\") pod \"etcd-operator-b45778765-x89l2\" (UID: 
\"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.918487 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0491318b-bf9a-46a2-b262-8aaf5f3061f9-trusted-ca\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:16 crc kubenswrapper[4748]: E0216 14:55:16.918729 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.418701787 +0000 UTC m=+143.110370896 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.919182 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tztz8\" (UniqueName: \"kubernetes.io/projected/1005279f-a20b-43cd-957a-731252114f31-kube-api-access-tztz8\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.919243 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a480239a-6f26-4189-9a3c-17896449a6e3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.930041 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" Feb 16 14:55:16 crc kubenswrapper[4748]: W0216 14:55:16.930810 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a73d6b_7fa7_483c_82f8_b2fa2d1a4906.slice/crio-9e2afb4386773b2ae732882c2e8b8fac9191cce495eecc8770235e8898adbb5e WatchSource:0}: Error finding container 9e2afb4386773b2ae732882c2e8b8fac9191cce495eecc8770235e8898adbb5e: Status 404 returned error can't find the container with id 9e2afb4386773b2ae732882c2e8b8fac9191cce495eecc8770235e8898adbb5e Feb 16 14:55:16 crc kubenswrapper[4748]: I0216 14:55:16.939788 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jqhqn"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020427 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020780 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-service-ca\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " 
pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020816 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxnjn\" (UniqueName: \"kubernetes.io/projected/914ad338-1e06-47bd-8338-1ac0aeae7acd-kube-api-access-dxnjn\") pod \"ingress-canary-x4th6\" (UID: \"914ad338-1e06-47bd-8338-1ac0aeae7acd\") " pod="openshift-ingress-canary/ingress-canary-x4th6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020836 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-mountpoint-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020873 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86e29be3-d9b1-46cb-be88-5cdfab20b770-srv-cert\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020891 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0491318b-bf9a-46a2-b262-8aaf5f3061f9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020910 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/487c3d9a-c413-417d-ada8-cba0e38f0633-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020956 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-client\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020973 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-socket-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.020991 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86e29be3-d9b1-46cb-be88-5cdfab20b770-profile-collector-cert\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021038 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-client-ca\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc 
kubenswrapper[4748]: I0216 14:55:17.021061 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzdgp\" (UniqueName: \"kubernetes.io/projected/d4609715-d418-4f84-843a-b916f5e920ec-kube-api-access-lzdgp\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021117 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4609715-d418-4f84-843a-b916f5e920ec-secret-volume\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021136 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fafd52-27d8-4936-87da-81bbf875738e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021153 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-service-ca\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021189 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1ef14221-89ef-4843-af97-142575e3284f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021210 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t29rq\" (UID: \"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021226 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-srv-cert\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021262 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nnlh\" (UniqueName: \"kubernetes.io/projected/8611553d-e8b4-4f6e-a2c0-abb19409cd02-kube-api-access-8nnlh\") pod \"downloads-7954f5f757-fs2p7\" (UID: \"8611553d-e8b4-4f6e-a2c0-abb19409cd02\") " pod="openshift-console/downloads-7954f5f757-fs2p7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021289 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0491318b-bf9a-46a2-b262-8aaf5f3061f9-trusted-ca\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021305 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-csi-data-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021363 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5613c863-4492-4b96-8045-c520e1a45ff1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021384 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt95t\" (UniqueName: \"kubernetes.io/projected/a6db781b-d9b3-4c44-a364-820b8dded174-kube-api-access-dt95t\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021427 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ae571ba-1425-40a1-93f7-498609d03860-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-glrmv\" (UID: \"8ae571ba-1425-40a1-93f7-498609d03860\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021444 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-registration-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021459 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4609715-d418-4f84-843a-b916f5e920ec-config-volume\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021475 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6db781b-d9b3-4c44-a364-820b8dded174-serving-cert\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021511 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc5q5\" (UniqueName: \"kubernetes.io/projected/a70b40e2-8e35-4633-bc3c-2450a0df944c-kube-api-access-tc5q5\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021543 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f8bb5-da9d-4994-8822-9a1755622d96-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j6s5s\" (UID: \"ab3f8bb5-da9d-4994-8822-9a1755622d96\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021584 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-webhook-cert\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021603 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/a695a4a3-bc60-4df5-8058-8986ba958074-kube-api-access-t4gl5\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021620 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff8f857-9981-4b16-93df-4200012ca322-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021655 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-trusted-ca\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021670 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-console-config\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021692 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8f857-9981-4b16-93df-4200012ca322-config\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021707 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5613c863-4492-4b96-8045-c520e1a45ff1-proxy-tls\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021753 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-metrics-certs\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021771 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/82faed60-15dd-4f58-a71e-5a46a1348e2e-signing-cabundle\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021787 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krbm\" (UniqueName: \"kubernetes.io/projected/4090e7b0-fd01-4592-bf29-78649fde005e-kube-api-access-4krbm\") pod \"migrator-59844c95c7-gw9l8\" (UID: \"4090e7b0-fd01-4592-bf29-78649fde005e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021828 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1005279f-a20b-43cd-957a-731252114f31-serving-cert\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021846 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0491318b-bf9a-46a2-b262-8aaf5f3061f9-metrics-tls\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021862 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14221-89ef-4843-af97-142575e3284f-config\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021897 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pwkm\" (UniqueName: \"kubernetes.io/projected/c07fb7ce-2a3a-4f95-a70a-70655731e922-kube-api-access-2pwkm\") pod \"machine-config-server-6hfsw\" (UID: 
\"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021913 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgzml\" (UniqueName: \"kubernetes.io/projected/84a8d805-58ad-4365-a405-7ab625c4c1a1-kube-api-access-vgzml\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021927 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqvp2\" (UniqueName: \"kubernetes.io/projected/86e29be3-d9b1-46cb-be88-5cdfab20b770-kube-api-access-cqvp2\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021944 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-config\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.021978 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-trusted-ca-bundle\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022028 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022067 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-registry-certificates\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022082 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/487c3d9a-c413-417d-ada8-cba0e38f0633-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022098 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-default-certificate\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022140 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-ca\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 
14:55:17.022158 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-stats-auth\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022173 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c07fb7ce-2a3a-4f95-a70a-70655731e922-node-bootstrap-token\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022190 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxnsk\" (UniqueName: \"kubernetes.io/projected/34a812e6-7d17-4c40-a9ab-376ef6ab5001-kube-api-access-qxnsk\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022229 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a480239a-6f26-4189-9a3c-17896449a6e3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022246 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dff8f857-9981-4b16-93df-4200012ca322-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: 
\"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022266 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxkhs\" (UniqueName: \"kubernetes.io/projected/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-kube-api-access-mxkhs\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022306 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-bound-sa-token\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022325 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-288kd\" (UniqueName: \"kubernetes.io/projected/82faed60-15dd-4f58-a71e-5a46a1348e2e-kube-api-access-288kd\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022343 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5613c863-4492-4b96-8045-c520e1a45ff1-images\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.022419 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.522377187 +0000 UTC m=+143.214046236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022489 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxjpv\" (UniqueName: \"kubernetes.io/projected/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-kube-api-access-cxjpv\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022536 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcxjr\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-kube-api-access-dcxjr\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022564 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/82faed60-15dd-4f58-a71e-5a46a1348e2e-signing-key\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022586 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022608 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022647 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhscz\" (UniqueName: \"kubernetes.io/projected/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-kube-api-access-hhscz\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022670 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fafd52-27d8-4936-87da-81bbf875738e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022690 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a70b40e2-8e35-4633-bc3c-2450a0df944c-config-volume\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022749 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdqp9\" (UniqueName: \"kubernetes.io/projected/487c3d9a-c413-417d-ada8-cba0e38f0633-kube-api-access-qdqp9\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022768 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-proxy-tls\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022792 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-tmpfs\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022811 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhwc8\" (UniqueName: \"kubernetes.io/projected/5613c863-4492-4b96-8045-c520e1a45ff1-kube-api-access-rhwc8\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: 
\"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022828 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwxp7\" (UniqueName: \"kubernetes.io/projected/77fafd52-27d8-4936-87da-81bbf875738e-kube-api-access-dwxp7\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022853 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpxfn\" (UniqueName: \"kubernetes.io/projected/ab3f8bb5-da9d-4994-8822-9a1755622d96-kube-api-access-vpxfn\") pod \"package-server-manager-789f6589d5-j6s5s\" (UID: \"ab3f8bb5-da9d-4994-8822-9a1755622d96\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022899 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-registry-tls\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.022938 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-config\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 
14:55:17.024316 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-oauth-config\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024362 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/487c3d9a-c413-417d-ada8-cba0e38f0633-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024410 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a695a4a3-bc60-4df5-8058-8986ba958074-serving-cert\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024445 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tztz8\" (UniqueName: \"kubernetes.io/projected/1005279f-a20b-43cd-957a-731252114f31-kube-api-access-tztz8\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024477 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-plugins-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " 
pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024506 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a480239a-6f26-4189-9a3c-17896449a6e3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024530 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prxzm\" (UniqueName: \"kubernetes.io/projected/8ae571ba-1425-40a1-93f7-498609d03860-kube-api-access-prxzm\") pod \"multus-admission-controller-857f4d67dd-glrmv\" (UID: \"8ae571ba-1425-40a1-93f7-498609d03860\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024560 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjv2w\" (UniqueName: \"kubernetes.io/projected/1b6ee71e-062d-49b9-b693-665355764e4f-kube-api-access-mjv2w\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024577 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a874c790-da94-4e03-a484-33c6f9126664-service-ca-bundle\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024596 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1ef14221-89ef-4843-af97-142575e3284f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024614 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw7r4\" (UniqueName: \"kubernetes.io/projected/d6ee3265-356a-4eb2-afd0-c72976719909-kube-api-access-pw7r4\") pod \"dns-operator-744455d44c-9x6qs\" (UID: \"d6ee3265-356a-4eb2-afd0-c72976719909\") " pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024648 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c07fb7ce-2a3a-4f95-a70a-70655731e922-certs\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024666 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6db781b-d9b3-4c44-a364-820b8dded174-config\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024689 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj5wl\" (UniqueName: \"kubernetes.io/projected/0491318b-bf9a-46a2-b262-8aaf5f3061f9-kube-api-access-lj5wl\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:17 crc 
kubenswrapper[4748]: I0216 14:55:17.024725 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-oauth-serving-cert\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024743 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a70b40e2-8e35-4633-bc3c-2450a0df944c-metrics-tls\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024768 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-serving-cert\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024788 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.024806 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9btw\" (UniqueName: \"kubernetes.io/projected/a874c790-da94-4e03-a484-33c6f9126664-kube-api-access-x9btw\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " 
pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.025052 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjnrd\" (UniqueName: \"kubernetes.io/projected/d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44-kube-api-access-kjnrd\") pod \"control-plane-machine-set-operator-78cbb6b69f-t29rq\" (UID: \"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.025120 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.025159 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6ee3265-356a-4eb2-afd0-c72976719909-metrics-tls\") pod \"dns-operator-744455d44c-9x6qs\" (UID: \"d6ee3265-356a-4eb2-afd0-c72976719909\") " pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.025182 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/914ad338-1e06-47bd-8338-1ac0aeae7acd-cert\") pod \"ingress-canary-x4th6\" (UID: \"914ad338-1e06-47bd-8338-1ac0aeae7acd\") " pod="openshift-ingress-canary/ingress-canary-x4th6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.025817 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-oauth-serving-cert\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.026376 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-config\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.028758 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-config\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.030329 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0491318b-bf9a-46a2-b262-8aaf5f3061f9-trusted-ca\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.031039 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-trusted-ca-bundle\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.031594 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-ca\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.032030 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/487c3d9a-c413-417d-ada8-cba0e38f0633-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.032914 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-registry-certificates\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.033673 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-service-ca\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.035191 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1ef14221-89ef-4843-af97-142575e3284f-config\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.038393 4748 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a480239a-6f26-4189-9a3c-17896449a6e3-ca-trust-extracted\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.039540 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-console-config\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.039952 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-trusted-ca\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.040662 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dff8f857-9981-4b16-93df-4200012ca322-config\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.041173 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-client-ca\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.041543 4748 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-service-ca\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.045383 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-oauth-config\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.045945 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-registry-tls\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.055310 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a695a4a3-bc60-4df5-8058-8986ba958074-etcd-client\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.057405 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-serving-cert\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.059287 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/487c3d9a-c413-417d-ada8-cba0e38f0633-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.060678 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1ef14221-89ef-4843-af97-142575e3284f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.060831 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a480239a-6f26-4189-9a3c-17896449a6e3-installation-pull-secrets\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.060876 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a695a4a3-bc60-4df5-8058-8986ba958074-serving-cert\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.061118 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dff8f857-9981-4b16-93df-4200012ca322-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.064926 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d6ee3265-356a-4eb2-afd0-c72976719909-metrics-tls\") pod \"dns-operator-744455d44c-9x6qs\" (UID: \"d6ee3265-356a-4eb2-afd0-c72976719909\") " pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.066495 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1005279f-a20b-43cd-957a-731252114f31-serving-cert\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.067561 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdqp9\" (UniqueName: \"kubernetes.io/projected/487c3d9a-c413-417d-ada8-cba0e38f0633-kube-api-access-qdqp9\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.069198 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0491318b-bf9a-46a2-b262-8aaf5f3061f9-metrics-tls\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.080298 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj5wl\" (UniqueName: 
\"kubernetes.io/projected/0491318b-bf9a-46a2-b262-8aaf5f3061f9-kube-api-access-lj5wl\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.093800 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0491318b-bf9a-46a2-b262-8aaf5f3061f9-bound-sa-token\") pod \"ingress-operator-5b745b69d9-gbrhq\" (UID: \"0491318b-bf9a-46a2-b262-8aaf5f3061f9\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.120395 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gl5\" (UniqueName: \"kubernetes.io/projected/a695a4a3-bc60-4df5-8058-8986ba958074-kube-api-access-t4gl5\") pod \"etcd-operator-b45778765-x89l2\" (UID: \"a695a4a3-bc60-4df5-8058-8986ba958074\") " pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126025 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-plugins-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126058 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prxzm\" (UniqueName: \"kubernetes.io/projected/8ae571ba-1425-40a1-93f7-498609d03860-kube-api-access-prxzm\") pod \"multus-admission-controller-857f4d67dd-glrmv\" (UID: \"8ae571ba-1425-40a1-93f7-498609d03860\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126084 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a874c790-da94-4e03-a484-33c6f9126664-service-ca-bundle\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126115 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c07fb7ce-2a3a-4f95-a70a-70655731e922-certs\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126132 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6db781b-d9b3-4c44-a364-820b8dded174-config\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126155 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a70b40e2-8e35-4633-bc3c-2450a0df944c-metrics-tls\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126172 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126187 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9btw\" (UniqueName: \"kubernetes.io/projected/a874c790-da94-4e03-a484-33c6f9126664-kube-api-access-x9btw\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126206 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjnrd\" (UniqueName: \"kubernetes.io/projected/d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44-kube-api-access-kjnrd\") pod \"control-plane-machine-set-operator-78cbb6b69f-t29rq\" (UID: \"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126227 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126245 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/914ad338-1e06-47bd-8338-1ac0aeae7acd-cert\") pod \"ingress-canary-x4th6\" (UID: \"914ad338-1e06-47bd-8338-1ac0aeae7acd\") " pod="openshift-ingress-canary/ingress-canary-x4th6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126261 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxnjn\" (UniqueName: \"kubernetes.io/projected/914ad338-1e06-47bd-8338-1ac0aeae7acd-kube-api-access-dxnjn\") pod \"ingress-canary-x4th6\" (UID: \"914ad338-1e06-47bd-8338-1ac0aeae7acd\") " pod="openshift-ingress-canary/ingress-canary-x4th6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126277 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-mountpoint-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126294 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86e29be3-d9b1-46cb-be88-5cdfab20b770-srv-cert\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126311 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-socket-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126330 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86e29be3-d9b1-46cb-be88-5cdfab20b770-profile-collector-cert\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126349 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzdgp\" (UniqueName: \"kubernetes.io/projected/d4609715-d418-4f84-843a-b916f5e920ec-kube-api-access-lzdgp\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126367 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4609715-d418-4f84-843a-b916f5e920ec-secret-volume\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126384 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fafd52-27d8-4936-87da-81bbf875738e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126388 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-plugins-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126401 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t29rq\" (UID: \"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126421 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-srv-cert\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126462 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126478 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-csi-data-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126495 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5613c863-4492-4b96-8045-c520e1a45ff1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126512 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt95t\" (UniqueName: \"kubernetes.io/projected/a6db781b-d9b3-4c44-a364-820b8dded174-kube-api-access-dt95t\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126528 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4609715-d418-4f84-843a-b916f5e920ec-config-volume\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126541 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6db781b-d9b3-4c44-a364-820b8dded174-serving-cert\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126557 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ae571ba-1425-40a1-93f7-498609d03860-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-glrmv\" (UID: \"8ae571ba-1425-40a1-93f7-498609d03860\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126575 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-registration-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126590 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc5q5\" (UniqueName: \"kubernetes.io/projected/a70b40e2-8e35-4633-bc3c-2450a0df944c-kube-api-access-tc5q5\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126608 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f8bb5-da9d-4994-8822-9a1755622d96-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j6s5s\" (UID: \"ab3f8bb5-da9d-4994-8822-9a1755622d96\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126621 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-webhook-cert\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126655 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5613c863-4492-4b96-8045-c520e1a45ff1-proxy-tls\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126670 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-metrics-certs\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126693 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/82faed60-15dd-4f58-a71e-5a46a1348e2e-signing-cabundle\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126722 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krbm\" (UniqueName: \"kubernetes.io/projected/4090e7b0-fd01-4592-bf29-78649fde005e-kube-api-access-4krbm\") pod \"migrator-59844c95c7-gw9l8\" (UID: \"4090e7b0-fd01-4592-bf29-78649fde005e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126743 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqvp2\" (UniqueName: \"kubernetes.io/projected/86e29be3-d9b1-46cb-be88-5cdfab20b770-kube-api-access-cqvp2\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126761 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pwkm\" (UniqueName: \"kubernetes.io/projected/c07fb7ce-2a3a-4f95-a70a-70655731e922-kube-api-access-2pwkm\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126776 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgzml\" (UniqueName: \"kubernetes.io/projected/84a8d805-58ad-4365-a405-7ab625c4c1a1-kube-api-access-vgzml\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.126795 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127068 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-mountpoint-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127159 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-default-certificate\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127184 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-stats-auth\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127202 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c07fb7ce-2a3a-4f95-a70a-70655731e922-node-bootstrap-token\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127228 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxnsk\" (UniqueName: \"kubernetes.io/projected/34a812e6-7d17-4c40-a9ab-376ef6ab5001-kube-api-access-qxnsk\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127251 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxkhs\" (UniqueName: \"kubernetes.io/projected/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-kube-api-access-mxkhs\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127272 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-288kd\" (UniqueName: \"kubernetes.io/projected/82faed60-15dd-4f58-a71e-5a46a1348e2e-kube-api-access-288kd\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127291 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5613c863-4492-4b96-8045-c520e1a45ff1-images\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127311 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxjpv\" (UniqueName: \"kubernetes.io/projected/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-kube-api-access-cxjpv\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127362 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/82faed60-15dd-4f58-a71e-5a46a1348e2e-signing-key\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127378 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127395 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhscz\" (UniqueName: \"kubernetes.io/projected/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-kube-api-access-hhscz\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127414 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fafd52-27d8-4936-87da-81bbf875738e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127429 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a70b40e2-8e35-4633-bc3c-2450a0df944c-config-volume\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127447 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-proxy-tls\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127464 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpxfn\" (UniqueName: \"kubernetes.io/projected/ab3f8bb5-da9d-4994-8822-9a1755622d96-kube-api-access-vpxfn\") pod \"package-server-manager-789f6589d5-j6s5s\" (UID: \"ab3f8bb5-da9d-4994-8822-9a1755622d96\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127499 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-tmpfs\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127515 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhwc8\" (UniqueName: \"kubernetes.io/projected/5613c863-4492-4b96-8045-c520e1a45ff1-kube-api-access-rhwc8\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.127531 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwxp7\" (UniqueName: \"kubernetes.io/projected/77fafd52-27d8-4936-87da-81bbf875738e-kube-api-access-dwxp7\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.128360 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4609715-d418-4f84-843a-b916f5e920ec-config-volume\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.129046 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-socket-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.129465 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a874c790-da94-4e03-a484-33c6f9126664-service-ca-bundle\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.131097 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.631082793 +0000 UTC m=+143.322751832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.133440 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-csi-data-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.133587 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/82faed60-15dd-4f58-a71e-5a46a1348e2e-signing-cabundle\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.134141 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.135042 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.137319 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5613c863-4492-4b96-8045-c520e1a45ff1-images\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.137843 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-tmpfs\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.138098 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a70b40e2-8e35-4633-bc3c-2450a0df944c-config-volume\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.139785 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab3f8bb5-da9d-4994-8822-9a1755622d96-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-j6s5s\" (UID: \"ab3f8bb5-da9d-4994-8822-9a1755622d96\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.139820 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/c07fb7ce-2a3a-4f95-a70a-70655731e922-node-bootstrap-token\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.140121 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fafd52-27d8-4936-87da-81bbf875738e-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.141046 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77fafd52-27d8-4936-87da-81bbf875738e-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.141407 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/84a8d805-58ad-4365-a405-7ab625c4c1a1-registration-dir\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.141527 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/c07fb7ce-2a3a-4f95-a70a-70655731e922-certs\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.142048 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/914ad338-1e06-47bd-8338-1ac0aeae7acd-cert\") pod \"ingress-canary-x4th6\" (UID: \"914ad338-1e06-47bd-8338-1ac0aeae7acd\") " pod="openshift-ingress-canary/ingress-canary-x4th6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.142295 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5613c863-4492-4b96-8045-c520e1a45ff1-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.142346 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nnlh\" (UniqueName: \"kubernetes.io/projected/8611553d-e8b4-4f6e-a2c0-abb19409cd02-kube-api-access-8nnlh\") pod \"downloads-7954f5f757-fs2p7\" (UID: \"8611553d-e8b4-4f6e-a2c0-abb19409cd02\") " pod="openshift-console/downloads-7954f5f757-fs2p7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.142944 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/86e29be3-d9b1-46cb-be88-5cdfab20b770-profile-collector-cert\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.143036 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-t29rq\" (UID: \"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.143997 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6db781b-d9b3-4c44-a364-820b8dded174-config\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.145140 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-stats-auth\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.146327 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-apiservice-cert\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.147161 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.147629 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-webhook-cert\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.147646 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4609715-d418-4f84-843a-b916f5e920ec-secret-volume\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.147767 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-proxy-tls\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.147957 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5613c863-4492-4b96-8045-c520e1a45ff1-proxy-tls\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.148144 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-metrics-certs\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.148271 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-srv-cert\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.148429 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ae571ba-1425-40a1-93f7-498609d03860-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-glrmv\" (UID: \"8ae571ba-1425-40a1-93f7-498609d03860\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.149274 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-profile-collector-cert\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.149318 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/82faed60-15dd-4f58-a71e-5a46a1348e2e-signing-key\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7"
Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.149538 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a70b40e2-8e35-4633-bc3c-2450a0df944c-metrics-tls\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.149705 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/a874c790-da94-4e03-a484-33c6f9126664-default-certificate\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.149832 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/86e29be3-d9b1-46cb-be88-5cdfab20b770-srv-cert\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.149975 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6db781b-d9b3-4c44-a364-820b8dded174-serving-cert\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.172429 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcxjr\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-kube-api-access-dcxjr\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.197282 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjv2w\" (UniqueName: \"kubernetes.io/projected/1b6ee71e-062d-49b9-b693-665355764e4f-kube-api-access-mjv2w\") pod \"console-f9d7485db-4xpkc\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.208573 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-fvh45"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.209977 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.217389 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-bound-sa-token\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.222127 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.228177 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.228387 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:17.728346184 +0000 UTC m=+143.420015223 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.228891 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.229258 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.729244716 +0000 UTC m=+143.420913755 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.240539 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tztz8\" (UniqueName: \"kubernetes.io/projected/1005279f-a20b-43cd-957a-731252114f31-kube-api-access-tztz8\") pod \"route-controller-manager-6576b87f9c-dl2zh\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.244559 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.254504 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw7r4\" (UniqueName: \"kubernetes.io/projected/d6ee3265-356a-4eb2-afd0-c72976719909-kube-api-access-pw7r4\") pod \"dns-operator-744455d44c-9x6qs\" (UID: \"d6ee3265-356a-4eb2-afd0-c72976719909\") " pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.262408 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.280246 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1ef14221-89ef-4843-af97-142575e3284f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-k5wnv\" (UID: \"1ef14221-89ef-4843-af97-142575e3284f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.294102 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bw7n"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.295408 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/487c3d9a-c413-417d-ada8-cba0e38f0633-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mrqwb\" (UID: \"487c3d9a-c413-417d-ada8-cba0e38f0633\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.312452 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dff8f857-9981-4b16-93df-4200012ca322-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-lszz8\" (UID: \"dff8f857-9981-4b16-93df-4200012ca322\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.329586 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:17 crc 
kubenswrapper[4748]: E0216 14:55:17.329789 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.829755368 +0000 UTC m=+143.521424407 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.330564 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.331146 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.831105431 +0000 UTC m=+143.522774550 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.334081 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.339243 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt95t\" (UniqueName: \"kubernetes.io/projected/a6db781b-d9b3-4c44-a364-820b8dded174-kube-api-access-dt95t\") pod \"service-ca-operator-777779d784-2ldx8\" (UID: \"a6db781b-d9b3-4c44-a364-820b8dded174\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" Feb 16 14:55:17 crc kubenswrapper[4748]: W0216 14:55:17.342363 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff3886f_9ddb_47c3_8a3b_3db0a8df51ae.slice/crio-fe889d88e498002833cdd03d3a7be2c070472db662ee52d68ac595f4dbaead1f WatchSource:0}: Error finding container fe889d88e498002833cdd03d3a7be2c070472db662ee52d68ac595f4dbaead1f: Status 404 returned error can't find the container with id fe889d88e498002833cdd03d3a7be2c070472db662ee52d68ac595f4dbaead1f Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.350908 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-fs2p7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.384612 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwxp7\" (UniqueName: \"kubernetes.io/projected/77fafd52-27d8-4936-87da-81bbf875738e-kube-api-access-dwxp7\") pod \"kube-storage-version-migrator-operator-b67b599dd-z9v5c\" (UID: \"77fafd52-27d8-4936-87da-81bbf875738e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.390205 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxnjn\" (UniqueName: \"kubernetes.io/projected/914ad338-1e06-47bd-8338-1ac0aeae7acd-kube-api-access-dxnjn\") pod \"ingress-canary-x4th6\" (UID: \"914ad338-1e06-47bd-8338-1ac0aeae7acd\") " pod="openshift-ingress-canary/ingress-canary-x4th6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.395556 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prxzm\" (UniqueName: \"kubernetes.io/projected/8ae571ba-1425-40a1-93f7-498609d03860-kube-api-access-prxzm\") pod \"multus-admission-controller-857f4d67dd-glrmv\" (UID: \"8ae571ba-1425-40a1-93f7-498609d03860\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.397153 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-z47nh"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.399544 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.399565 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnhj8"] Feb 16 14:55:17 crc 
kubenswrapper[4748]: I0216 14:55:17.423576 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzdgp\" (UniqueName: \"kubernetes.io/projected/d4609715-d418-4f84-843a-b916f5e920ec-kube-api-access-lzdgp\") pod \"collect-profiles-29520885-6j92m\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.431926 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.434277 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.439258 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.939223882 +0000 UTC m=+143.630892921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.439662 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.440068 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:17.940053593 +0000 UTC m=+143.631722642 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.448659 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjnrd\" (UniqueName: \"kubernetes.io/projected/d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44-kube-api-access-kjnrd\") pod \"control-plane-machine-set-operator-78cbb6b69f-t29rq\" (UID: \"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.448932 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.453418 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-x4th6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.467189 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9btw\" (UniqueName: \"kubernetes.io/projected/a874c790-da94-4e03-a484-33c6f9126664-kube-api-access-x9btw\") pod \"router-default-5444994796-hqs5w\" (UID: \"a874c790-da94-4e03-a484-33c6f9126664\") " pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.483425 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krbm\" (UniqueName: \"kubernetes.io/projected/4090e7b0-fd01-4592-bf29-78649fde005e-kube-api-access-4krbm\") pod \"migrator-59844c95c7-gw9l8\" (UID: \"4090e7b0-fd01-4592-bf29-78649fde005e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.488293 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.495978 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqvp2\" (UniqueName: \"kubernetes.io/projected/86e29be3-d9b1-46cb-be88-5cdfab20b770-kube-api-access-cqvp2\") pod \"catalog-operator-68c6474976-dvsq6\" (UID: \"86e29be3-d9b1-46cb-be88-5cdfab20b770\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.502891 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.507093 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.511279 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.514182 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.514604 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pwkm\" (UniqueName: \"kubernetes.io/projected/c07fb7ce-2a3a-4f95-a70a-70655731e922-kube-api-access-2pwkm\") pod \"machine-config-server-6hfsw\" (UID: \"c07fb7ce-2a3a-4f95-a70a-70655731e922\") " pod="openshift-machine-config-operator/machine-config-server-6hfsw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.517098 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-clm9d"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.534008 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgzml\" (UniqueName: \"kubernetes.io/projected/84a8d805-58ad-4365-a405-7ab625c4c1a1-kube-api-access-vgzml\") pod \"csi-hostpathplugin-rn626\" (UID: \"84a8d805-58ad-4365-a405-7ab625c4c1a1\") " pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.535243 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.540390 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.540553 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.040510653 +0000 UTC m=+143.732179682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.540591 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.541066 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.041049717 +0000 UTC m=+143.732718756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.553876 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpxfn\" (UniqueName: \"kubernetes.io/projected/ab3f8bb5-da9d-4994-8822-9a1755622d96-kube-api-access-vpxfn\") pod \"package-server-manager-789f6589d5-j6s5s\" (UID: \"ab3f8bb5-da9d-4994-8822-9a1755622d96\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.586480 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.590463 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhscz\" (UniqueName: \"kubernetes.io/projected/0bd14f95-2eaf-4c9b-a140-dc7c240c1d70-kube-api-access-hhscz\") pod \"olm-operator-6b444d44fb-cmbq6\" (UID: \"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.598924 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-288kd\" (UniqueName: \"kubernetes.io/projected/82faed60-15dd-4f58-a71e-5a46a1348e2e-kube-api-access-288kd\") pod \"service-ca-9c57cc56f-4m6k7\" (UID: \"82faed60-15dd-4f58-a71e-5a46a1348e2e\") " pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.610199 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.622374 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxnsk\" (UniqueName: \"kubernetes.io/projected/34a812e6-7d17-4c40-a9ab-376ef6ab5001-kube-api-access-qxnsk\") pod \"marketplace-operator-79b997595-qm9b7\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.623880 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.632700 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.636511 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxkhs\" (UniqueName: \"kubernetes.io/projected/1eab95e3-e9ca-4793-9ccb-ea25223ec0a3-kube-api-access-mxkhs\") pod \"packageserver-d55dfcdfc-rksmj\" (UID: \"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.642620 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.642962 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.142946073 +0000 UTC m=+143.834615112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.646448 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.659223 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.667552 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.681514 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxjpv\" (UniqueName: \"kubernetes.io/projected/8cdab3c5-a3ee-4f2c-8dde-d0366c489344-kube-api-access-cxjpv\") pod \"machine-config-controller-84d6567774-qg7wg\" (UID: \"8cdab3c5-a3ee-4f2c-8dde-d0366c489344\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.687985 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.694095 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.710137 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.713156 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhwc8\" (UniqueName: \"kubernetes.io/projected/5613c863-4492-4b96-8045-c520e1a45ff1-kube-api-access-rhwc8\") pod \"machine-config-operator-74547568cd-8vnkq\" (UID: \"5613c863-4492-4b96-8045-c520e1a45ff1\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.723333 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.723490 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc5q5\" (UniqueName: \"kubernetes.io/projected/a70b40e2-8e35-4633-bc3c-2450a0df944c-kube-api-access-tc5q5\") pod \"dns-default-k2pmm\" (UID: \"a70b40e2-8e35-4633-bc3c-2450a0df944c\") " pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.744444 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.745132 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.245112636 +0000 UTC m=+143.936781675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.753643 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.773011 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6hfsw" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.779296 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.781685 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rn626" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.846128 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.846705 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:18.346687994 +0000 UTC m=+144.038357033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.875173 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" event={"ID":"7c228428-e7c4-4dc0-99d5-f90f0231ae29","Type":"ContainerStarted","Data":"9e56b56c7c6607e374f3f746ccca1c2b4f8bc3689fcafe2b042451183ca29c68"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.875211 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" event={"ID":"7c228428-e7c4-4dc0-99d5-f90f0231ae29","Type":"ContainerStarted","Data":"89735358f361a2c99164a08ab394f5c152bcb951bc5a30bbfbb490965ba22e25"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.875538 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.883308 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" event={"ID":"2f4e162c-8f1b-4951-b339-d9ac41932f4c","Type":"ContainerStarted","Data":"9ff1e0bae7b0ce330e4641b75c6268a882f9240fff4e423f9a29e4f1e381b0e6"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.883368 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" event={"ID":"2f4e162c-8f1b-4951-b339-d9ac41932f4c","Type":"ContainerStarted","Data":"ef45cbfe8698025141ae9e7c4a6dfb288cedcec74dee599f4f4870f15664f6a5"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.893347 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-x89l2"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.901051 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.911815 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" event={"ID":"62e84868-6940-4784-8892-ac255eacc315","Type":"ContainerStarted","Data":"eb4f302e1cd5eebc99151d8995e23e443936706a1e51e5cd8eb532d570c1cf85"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.919051 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4xpkc"] Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.923801 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" event={"ID":"ad452651-e143-4074-8f39-d3074bc487ca","Type":"ContainerStarted","Data":"86b96f8c25b4dafccc42f83506192f90336d7904467d346cfa5a4a0bdcdb3746"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.949762 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:17 crc kubenswrapper[4748]: E0216 14:55:17.950360 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.450336733 +0000 UTC m=+144.142005772 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.954096 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" event={"ID":"d066b2b8-53ef-4936-ade8-6290469be4ff","Type":"ContainerStarted","Data":"f65a7296afd25010f31e3f271d2bc954b0b48c319cc17ee6b6e98b3a94297f76"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.954157 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" event={"ID":"d066b2b8-53ef-4936-ade8-6290469be4ff","Type":"ContainerStarted","Data":"b65cb36af68e9717b6197bf9824b80019157515b7af47395dd1921eb3223f80c"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.966390 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-clm9d" event={"ID":"f889a3dd-c5ed-49f7-98b5-0cafd33399a4","Type":"ContainerStarted","Data":"f83bbba7fe289ca78cebb29b850da1bf1a9a76bcdb6e7148c0463343ef7d62c4"} Feb 16 14:55:17 crc kubenswrapper[4748]: I0216 14:55:17.976934 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:17 crc kubenswrapper[4748]: W0216 14:55:17.977571 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0491318b_bf9a_46a2_b262_8aaf5f3061f9.slice/crio-f162aff0ee2bfe9db1e1973747da59bc4b4492a79ad126a637366bfa9948a90d WatchSource:0}: Error finding container f162aff0ee2bfe9db1e1973747da59bc4b4492a79ad126a637366bfa9948a90d: Status 404 returned error can't find the container with id f162aff0ee2bfe9db1e1973747da59bc4b4492a79ad126a637366bfa9948a90d Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.003925 4748 generic.go:334] "Generic (PLEG): container finished" podID="e168487f-8cf1-45d6-9f48-7e15e92f7c22" containerID="ff2a4046c6c950cd143d7220d761c916704b45d610655411b06f9e2fcdde50be" exitCode=0 Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.004380 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" event={"ID":"e168487f-8cf1-45d6-9f48-7e15e92f7c22","Type":"ContainerDied","Data":"ff2a4046c6c950cd143d7220d761c916704b45d610655411b06f9e2fcdde50be"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.004454 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" event={"ID":"e168487f-8cf1-45d6-9f48-7e15e92f7c22","Type":"ContainerStarted","Data":"9af0a8063e420c7181226dc7d14da5cb95ae6c810992efa07e12399aa67c38ab"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.020574 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" event={"ID":"263bd0a0-5043-48f7-a185-30ef874fc6e7","Type":"ContainerStarted","Data":"76354a7bc8b1687587a8985fdac9655f8cb559bcaaa2c73cd28bfa3a9e1db71b"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.040016 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" event={"ID":"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906","Type":"ContainerStarted","Data":"2f4a7b1f341b47447710c289c89438c742f55c197d597a8db54b73ff2797ec83"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.040065 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" event={"ID":"75a73d6b-7fa7-483c-82f8-b2fa2d1a4906","Type":"ContainerStarted","Data":"9e2afb4386773b2ae732882c2e8b8fac9191cce495eecc8770235e8898adbb5e"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.043835 4748 generic.go:334] "Generic (PLEG): container finished" podID="b1c29b40-2c74-45be-bdf6-2d77ccc9c6da" containerID="6fbcadf9f86d9edce6335be3327fe67d005b592b28e3d7dab0a07d882e48ebe4" exitCode=0 Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.044312 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fvh45" event={"ID":"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da","Type":"ContainerDied","Data":"6fbcadf9f86d9edce6335be3327fe67d005b592b28e3d7dab0a07d882e48ebe4"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.044405 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fvh45" event={"ID":"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da","Type":"ContainerStarted","Data":"dc38759560d05900b295bcf4ac5886f19c44d3878d2777e2249a0a74282e66b3"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.047850 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" event={"ID":"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae","Type":"ContainerStarted","Data":"fe889d88e498002833cdd03d3a7be2c070472db662ee52d68ac595f4dbaead1f"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.059686 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.061307 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.561285425 +0000 UTC m=+144.252954464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.087551 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" event={"ID":"640e1720-ddc6-4148-8f90-a80375ca4187","Type":"ContainerStarted","Data":"3deef50bc0c36d59cc7d5897b1309479b0d2532b00d973a535ac8157855ce65b"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.092150 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" event={"ID":"b3886887-0bd5-4b76-ad85-cf0f065a997b","Type":"ContainerStarted","Data":"2947b958affcc776c50587f1943647e863dd49d9cf9b6d3ce0ba943532cc5a9f"} Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.154988 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb"] 
Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.161732 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.165983 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.66596995 +0000 UTC m=+144.357638989 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.173741 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-x4th6"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.195563 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.197862 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-fs2p7"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.215316 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-multus/multus-admission-controller-857f4d67dd-glrmv"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.267250 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.267451 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.767410205 +0000 UTC m=+144.459079244 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.373178 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.374020 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.874007158 +0000 UTC m=+144.565676197 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.376124 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.382056 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.384049 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.429046 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-9x6qs"] Feb 16 14:55:18 crc kubenswrapper[4748]: W0216 14:55:18.445902 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77fafd52_27d8_4936_87da_81bbf875738e.slice/crio-5ea4d9dd88946cb5865226b962eb3269d469a7ba1e405e1babb92a5a6f16ff6a WatchSource:0}: Error finding container 5ea4d9dd88946cb5865226b962eb3269d469a7ba1e405e1babb92a5a6f16ff6a: Status 404 returned error can't find the container with id 5ea4d9dd88946cb5865226b962eb3269d469a7ba1e405e1babb92a5a6f16ff6a Feb 16 14:55:18 crc kubenswrapper[4748]: W0216 14:55:18.447969 4748 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8611553d_e8b4_4f6e_a2c0_abb19409cd02.slice/crio-985810ce818e9f7833e45eb2de0b9b8d99b67729d5d5bb3f2ca90d3ce597dd10 WatchSource:0}: Error finding container 985810ce818e9f7833e45eb2de0b9b8d99b67729d5d5bb3f2ca90d3ce597dd10: Status 404 returned error can't find the container with id 985810ce818e9f7833e45eb2de0b9b8d99b67729d5d5bb3f2ca90d3ce597dd10 Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.452611 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.475402 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.477424 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:18.977402252 +0000 UTC m=+144.669071281 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: W0216 14:55:18.486437 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6ee3265_356a_4eb2_afd0_c72976719909.slice/crio-3a48e77816ff09a312bd44bec18034eaabdcec942da7e1283d97790fc2bad2ec WatchSource:0}: Error finding container 3a48e77816ff09a312bd44bec18034eaabdcec942da7e1283d97790fc2bad2ec: Status 404 returned error can't find the container with id 3a48e77816ff09a312bd44bec18034eaabdcec942da7e1283d97790fc2bad2ec Feb 16 14:55:18 crc kubenswrapper[4748]: W0216 14:55:18.488462 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bd14f95_2eaf_4c9b_a140_dc7c240c1d70.slice/crio-85fd3a84a7fa69a9a6df823e848dd3bcf3ca4efda35ecec60528b4423b316625 WatchSource:0}: Error finding container 85fd3a84a7fa69a9a6df823e848dd3bcf3ca4efda35ecec60528b4423b316625: Status 404 returned error can't find the container with id 85fd3a84a7fa69a9a6df823e848dd3bcf3ca4efda35ecec60528b4423b316625 Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.570390 4748 csr.go:261] certificate signing request csr-wtrzj is approved, waiting to be issued Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.578499 4748 csr.go:257] certificate signing request csr-wtrzj is issued Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.579322 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.579903 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.079885791 +0000 UTC m=+144.771554830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.610037 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.695812 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.697002 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:19.196978286 +0000 UTC m=+144.888647325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.756362 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.799268 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.800065 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.300046261 +0000 UTC m=+144.991715300 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.800975 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.839351 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qm9b7"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.881538 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg"] Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.908589 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:18 crc kubenswrapper[4748]: E0216 14:55:18.909369 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.409328641 +0000 UTC m=+145.100997680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:18 crc kubenswrapper[4748]: I0216 14:55:18.973116 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj"] Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.010146 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.010540 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.51052401 +0000 UTC m=+145.202193049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.024891 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s"] Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.078013 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-k2pmm"] Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.111098 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.111516 4748 generic.go:334] "Generic (PLEG): container finished" podID="640e1720-ddc6-4148-8f90-a80375ca4187" containerID="079fe20fd8a6500e87c3666f4eae584b81b425362c43669a888a20d4ab21e863" exitCode=0 Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.111609 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" event={"ID":"640e1720-ddc6-4148-8f90-a80375ca4187","Type":"ContainerDied","Data":"079fe20fd8a6500e87c3666f4eae584b81b425362c43669a888a20d4ab21e863"} Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.111623 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.611589175 +0000 UTC m=+145.303258214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.113690 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" event={"ID":"1ef14221-89ef-4843-af97-142575e3284f","Type":"ContainerStarted","Data":"b7cdf6762ac2834898d249b91a1c0cdc623018da80691edf5e17c7331e586c6c"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.118228 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" event={"ID":"1005279f-a20b-43cd-957a-731252114f31","Type":"ContainerStarted","Data":"6aa69f22407f191a4989278999547ba255c93a2d259bc462b8f9d407033efe60"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.118265 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" event={"ID":"1005279f-a20b-43cd-957a-731252114f31","Type":"ContainerStarted","Data":"9735d5237c92f1f7453e1ef3601f6bb0895ce6ee30cde3a4e69e89d639acda27"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.118998 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.120470 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" event={"ID":"ad452651-e143-4074-8f39-d3074bc487ca","Type":"ContainerStarted","Data":"fe4bde89ec3ec168200337a3f3739acd0fc094b3bf0c69bb4eab771be5eae8fc"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.120889 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.122912 4748 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dl2zh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.122958 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" podUID="1005279f-a20b-43cd-957a-731252114f31" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.123837 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fs2p7" event={"ID":"8611553d-e8b4-4f6e-a2c0-abb19409cd02","Type":"ContainerStarted","Data":"985810ce818e9f7833e45eb2de0b9b8d99b67729d5d5bb3f2ca90d3ce597dd10"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.125279 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" 
event={"ID":"62e84868-6940-4784-8892-ac255eacc315","Type":"ContainerStarted","Data":"85a1cf3c4b3e1c3ded3ad15302c1a8108ca611e4822fce5e545e4ac4c6abf83c"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.126495 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" event={"ID":"86e29be3-d9b1-46cb-be88-5cdfab20b770","Type":"ContainerStarted","Data":"6db6a879b5081dfad53705b89c664e6c82a4e79d95f6181977d63e5501682354"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.126950 4748 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hnhj8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.127034 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" podUID="ad452651-e143-4074-8f39-d3074bc487ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.134491 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rn626"] Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.134687 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6hfsw" event={"ID":"c07fb7ce-2a3a-4f95-a70a-70655731e922","Type":"ContainerStarted","Data":"b13c3c17ed37729915b90d436c33a660082422a310ba42f678cc4317246e95e3"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.136860 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" 
event={"ID":"77fafd52-27d8-4936-87da-81bbf875738e","Type":"ContainerStarted","Data":"5ea4d9dd88946cb5865226b962eb3269d469a7ba1e405e1babb92a5a6f16ff6a"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.139259 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jqhqn" podStartSLOduration=124.139245013 podStartE2EDuration="2m4.139245013s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:19.132023634 +0000 UTC m=+144.823692683" watchObservedRunningTime="2026-02-16 14:55:19.139245013 +0000 UTC m=+144.830914042" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.140079 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" event={"ID":"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44","Type":"ContainerStarted","Data":"7a5ab27a81cb08eb013ee81fac1b35bf8ea1a24d4d83a3ea2e8d5955c0596c6b"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.142538 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" event={"ID":"d6ee3265-356a-4eb2-afd0-c72976719909","Type":"ContainerStarted","Data":"3a48e77816ff09a312bd44bec18034eaabdcec942da7e1283d97790fc2bad2ec"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.152110 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq"] Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.153873 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8"] Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.154477 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6jhx4" podStartSLOduration=125.154462702 podStartE2EDuration="2m5.154462702s" podCreationTimestamp="2026-02-16 14:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:19.153488948 +0000 UTC m=+144.845157987" watchObservedRunningTime="2026-02-16 14:55:19.154462702 +0000 UTC m=+144.846131741" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.156640 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-clm9d" event={"ID":"f889a3dd-c5ed-49f7-98b5-0cafd33399a4","Type":"ContainerStarted","Data":"7d0aec68e943bb5849d7a7cc1932cdf19c0317b9ced0856c95186e069c5b019d"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.157654 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.161025 4748 patch_prober.go:28] interesting pod/console-operator-58897d9998-clm9d container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.161078 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-clm9d" podUID="f889a3dd-c5ed-49f7-98b5-0cafd33399a4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.13:8443/readyz\": dial tcp 10.217.0.13:8443: connect: connection refused" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.164731 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" 
event={"ID":"dff8f857-9981-4b16-93df-4200012ca322","Type":"ContainerStarted","Data":"150c872dd455284fa0a301ea21f2bd98a5f6434f4ee5948dd35a7394bbcc3f27"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.177696 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xpkc" event={"ID":"1b6ee71e-062d-49b9-b693-665355764e4f","Type":"ContainerStarted","Data":"21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.177752 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xpkc" event={"ID":"1b6ee71e-062d-49b9-b693-665355764e4f","Type":"ContainerStarted","Data":"f2a560d2178312939e52ecdbc4ab8370450b9f7a34562f5cc33ccd7997b27629"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.183429 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" event={"ID":"b3886887-0bd5-4b76-ad85-cf0f065a997b","Type":"ContainerStarted","Data":"4c01d4d37855e0860292394736a01a1e533ac860bb999aa276982c3a0dfcd8bf"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.206822 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" event={"ID":"487c3d9a-c413-417d-ada8-cba0e38f0633","Type":"ContainerStarted","Data":"187aa18aa4d34c40d13e5e1f5f9832c1f46543d6fbac1593421d983fc695e02b"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.206881 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" event={"ID":"487c3d9a-c413-417d-ada8-cba0e38f0633","Type":"ContainerStarted","Data":"dfa9484fcd6b0a4a3f47c2c480670410813ecc5b7cb79f069e1a8abc74863295"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.208069 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" event={"ID":"8cdab3c5-a3ee-4f2c-8dde-d0366c489344","Type":"ContainerStarted","Data":"492e0c28c76f1f103a144fa8861563c8930a43737f15faa44f1a5506eb6b5f34"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.214253 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-4m6k7"] Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.215476 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.218469 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" event={"ID":"a6db781b-d9b3-4c44-a364-820b8dded174","Type":"ContainerStarted","Data":"d3bd8d068a8fd25f6ae061a75d43908bc81eb0ffb1cc045d74491ecb70c75ebb"} Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.218808 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.718789693 +0000 UTC m=+145.410458732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.231943 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hqs5w" event={"ID":"a874c790-da94-4e03-a484-33c6f9126664","Type":"ContainerStarted","Data":"e4aa5ccde7b527c61b3e640b13f4154e86607e005639d22170901a6b814348eb"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.242480 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" event={"ID":"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae","Type":"ContainerStarted","Data":"79a60fd10a132d21ff531eb614f810cf3d7b335708d2cdcffb92046ed6742bc4"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.243369 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.244772 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" event={"ID":"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3","Type":"ContainerStarted","Data":"0e7648a67651f27a8e3bd767cab30e622cf1adff63f59cf20be5f6d7bd75683b"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.245419 4748 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-2bw7n container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" 
start-of-body= Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.245448 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" podUID="aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.9:6443/healthz\": dial tcp 10.217.0.9:6443: connect: connection refused" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.252951 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" event={"ID":"0491318b-bf9a-46a2-b262-8aaf5f3061f9","Type":"ContainerStarted","Data":"748f9ea3862ef3b9ddce4860bd391072245f1618cc574372a02b94533069cb06"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.252996 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" event={"ID":"0491318b-bf9a-46a2-b262-8aaf5f3061f9","Type":"ContainerStarted","Data":"f162aff0ee2bfe9db1e1973747da59bc4b4492a79ad126a637366bfa9948a90d"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.263851 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" event={"ID":"a695a4a3-bc60-4df5-8058-8986ba958074","Type":"ContainerStarted","Data":"6f5050e4692979e20fc4f0187c42676d759e2d340e1ae4701882148be6e4ab6b"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.270280 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" event={"ID":"263bd0a0-5043-48f7-a185-30ef874fc6e7","Type":"ContainerStarted","Data":"e538ea7a76682e62ab96ff3842115af791117a131c85eedbe9742245deba37e8"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.270325 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" 
event={"ID":"263bd0a0-5043-48f7-a185-30ef874fc6e7","Type":"ContainerStarted","Data":"eeb9a9e88f28d9e33f1b83175918016ea7cd25f309b90b821b153a6e0fc140c6"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.283230 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" event={"ID":"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70","Type":"ContainerStarted","Data":"85fd3a84a7fa69a9a6df823e848dd3bcf3ca4efda35ecec60528b4423b316625"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.289845 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" event={"ID":"d4609715-d418-4f84-843a-b916f5e920ec","Type":"ContainerStarted","Data":"c0c9a6635678a4c0ea8cd3b6cfc48bd62f2ab4a95776a2adf41ce316de9b3f7f"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.295936 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" event={"ID":"34a812e6-7d17-4c40-a9ab-376ef6ab5001","Type":"ContainerStarted","Data":"f6e133777689d2cbd0a01de8d1f51aae02d8ac82aaa3ae06d9ea36aa13e1fce4"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.300045 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" event={"ID":"8ae571ba-1425-40a1-93f7-498609d03860","Type":"ContainerStarted","Data":"6e565cc775f8a7ff48d0b9f044891150312e8f7d671ab50da49d09853b528674"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.311069 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x4th6" event={"ID":"914ad338-1e06-47bd-8338-1ac0aeae7acd","Type":"ContainerStarted","Data":"31cb3a902c4412bd32f7e0c4e4b73498c53015917836e1207ff00de16a736d8a"} Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.315776 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-42ljz" podStartSLOduration=124.315765007 podStartE2EDuration="2m4.315765007s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:19.313952962 +0000 UTC m=+145.005622001" watchObservedRunningTime="2026-02-16 14:55:19.315765007 +0000 UTC m=+145.007434046" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.316726 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.318440 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.818422833 +0000 UTC m=+145.510091872 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.407830 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s48fz" podStartSLOduration=124.407797327 podStartE2EDuration="2m4.407797327s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:19.407445369 +0000 UTC m=+145.099114398" watchObservedRunningTime="2026-02-16 14:55:19.407797327 +0000 UTC m=+145.099466366" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.419336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.421423 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:19.921405556 +0000 UTC m=+145.613074605 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.523541 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.524149 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.024126613 +0000 UTC m=+145.715795652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.524266 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.524592 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.024584904 +0000 UTC m=+145.716253943 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.582025 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-16 14:50:18 +0000 UTC, rotation deadline is 2026-11-28 08:28:46.513977076 +0000 UTC Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.582074 4748 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6833h33m26.931905121s for next certificate rotation Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.626023 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.627145 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.127119626 +0000 UTC m=+145.818788665 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.730173 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.730518 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.230504739 +0000 UTC m=+145.922173778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.834375 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.834902 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.334879427 +0000 UTC m=+146.026548466 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.892059 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4xpkc" podStartSLOduration=124.89204138 podStartE2EDuration="2m4.89204138s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:19.883877467 +0000 UTC m=+145.575546506" watchObservedRunningTime="2026-02-16 14:55:19.89204138 +0000 UTC m=+145.583710419" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.893669 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" podStartSLOduration=123.89366419 podStartE2EDuration="2m3.89366419s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:19.835337519 +0000 UTC m=+145.527006558" watchObservedRunningTime="2026-02-16 14:55:19.89366419 +0000 UTC m=+145.585333229" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.949568 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: 
\"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:19 crc kubenswrapper[4748]: E0216 14:55:19.950085 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.450072254 +0000 UTC m=+146.141741293 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.994937 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-659gc" podStartSLOduration=124.99491662 podStartE2EDuration="2m4.99491662s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:19.985129647 +0000 UTC m=+145.676798716" watchObservedRunningTime="2026-02-16 14:55:19.99491662 +0000 UTC m=+145.686585659" Feb 16 14:55:19 crc kubenswrapper[4748]: I0216 14:55:19.996576 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mrqwb" podStartSLOduration=124.996570171 podStartE2EDuration="2m4.996570171s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 14:55:19.941245195 +0000 UTC m=+145.632914234" watchObservedRunningTime="2026-02-16 14:55:19.996570171 +0000 UTC m=+145.688239210" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.019457 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" podStartSLOduration=125.01942531 podStartE2EDuration="2m5.01942531s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.014247781 +0000 UTC m=+145.705916820" watchObservedRunningTime="2026-02-16 14:55:20.01942531 +0000 UTC m=+145.711094349" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.052372 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.053281 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.553257302 +0000 UTC m=+146.244926341 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.103314 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-clm9d" podStartSLOduration=125.103291358 podStartE2EDuration="2m5.103291358s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.102093578 +0000 UTC m=+145.793762617" watchObservedRunningTime="2026-02-16 14:55:20.103291358 +0000 UTC m=+145.794960397" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.175389 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.175777 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.675764262 +0000 UTC m=+146.367433301 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.279230 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.279995 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.779977955 +0000 UTC m=+146.471646994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.340883 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-z47nh" podStartSLOduration=125.340866781 podStartE2EDuration="2m5.340866781s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.340324197 +0000 UTC m=+146.031993236" watchObservedRunningTime="2026-02-16 14:55:20.340866781 +0000 UTC m=+146.032535820" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.341053 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" podStartSLOduration=125.341049185 podStartE2EDuration="2m5.341049185s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.265485605 +0000 UTC m=+145.957154674" watchObservedRunningTime="2026-02-16 14:55:20.341049185 +0000 UTC m=+146.032718224" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.377790 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" event={"ID":"a695a4a3-bc60-4df5-8058-8986ba958074","Type":"ContainerStarted","Data":"15a34332714b15b0466203b3847332755854a74dda1ac905fec575fba7803964"} Feb 16 14:55:20 crc 
kubenswrapper[4748]: I0216 14:55:20.400834 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" event={"ID":"ab3f8bb5-da9d-4994-8822-9a1755622d96","Type":"ContainerStarted","Data":"00094d9e1ed0145549ca4f7944f7dfad8f23f3a17b7b40792e9770bcb387ac64"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.400874 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fvh45" event={"ID":"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da","Type":"ContainerStarted","Data":"973afc6ed8b66701afbc298a8e170878b88024865c1f1952f9590ee3f567351f"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.400887 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-x4th6" event={"ID":"914ad338-1e06-47bd-8338-1ac0aeae7acd","Type":"ContainerStarted","Data":"7d61e9a5bd02edbb70d8ab84a3e16ec436982e31e84307cb1e5b8b968b2ecf4c"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.400899 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" event={"ID":"d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44","Type":"ContainerStarted","Data":"8347cf02f3a7ce092178593dab2668d6ee822213d069fa31755e008224b11028"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.401911 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.402291 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:20.902279229 +0000 UTC m=+146.593948258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.451766 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" event={"ID":"0bd14f95-2eaf-4c9b-a140-dc7c240c1d70","Type":"ContainerStarted","Data":"16317c60a9652bdfb486c024d9ce850ecc616e8208c4ecd3f656e749f1be4d73"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.452546 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.454672 4748 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-cmbq6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.454705 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" podUID="0bd14f95-2eaf-4c9b-a140-dc7c240c1d70" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.479743 4748 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rn626" event={"ID":"84a8d805-58ad-4365-a405-7ab625c4c1a1","Type":"ContainerStarted","Data":"28151ce6ec34a45bb2ad02e863f91cb76c46cabf053fb9443c1233b530acf1a4"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.489541 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" event={"ID":"5613c863-4492-4b96-8045-c520e1a45ff1","Type":"ContainerStarted","Data":"261b7b024ed14cfb8377d726fdffe70c734127fa0262d0701974c427de9da961"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.495305 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" event={"ID":"dff8f857-9981-4b16-93df-4200012ca322","Type":"ContainerStarted","Data":"f6bb2fef6727d6f08ba99929d8ee71eeb6b21aea6d2334b2870c4a66a3a1c11e"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.507624 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.508813 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.00879831 +0000 UTC m=+146.700467349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.519504 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-x89l2" podStartSLOduration=125.519483166 podStartE2EDuration="2m5.519483166s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.44128918 +0000 UTC m=+146.132958239" watchObservedRunningTime="2026-02-16 14:55:20.519483166 +0000 UTC m=+146.211152205" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.522322 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" event={"ID":"4090e7b0-fd01-4592-bf29-78649fde005e","Type":"ContainerStarted","Data":"9762fe1562104dd4b92b17605018e50387444d7fff3407191250030a5173bce7"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.546823 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-t29rq" podStartSLOduration=125.546802276 podStartE2EDuration="2m5.546802276s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.519045625 +0000 UTC m=+146.210714664" watchObservedRunningTime="2026-02-16 14:55:20.546802276 +0000 UTC m=+146.238471315" Feb 16 14:55:20 
crc kubenswrapper[4748]: I0216 14:55:20.565680 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6hfsw" event={"ID":"c07fb7ce-2a3a-4f95-a70a-70655731e922","Type":"ContainerStarted","Data":"38222c2fdc30105f01c978d502f0938216d45fc7bbde7ff53b1f16b4256bf311"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.596677 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-lszz8" podStartSLOduration=125.596660917 podStartE2EDuration="2m5.596660917s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.594489983 +0000 UTC m=+146.286159022" watchObservedRunningTime="2026-02-16 14:55:20.596660917 +0000 UTC m=+146.288329956" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.597457 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-x4th6" podStartSLOduration=6.597452017 podStartE2EDuration="6.597452017s" podCreationTimestamp="2026-02-16 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.563386139 +0000 UTC m=+146.255055188" watchObservedRunningTime="2026-02-16 14:55:20.597452017 +0000 UTC m=+146.289121056" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.609433 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:20 crc kubenswrapper[4748]: 
E0216 14:55:20.615232 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.115204009 +0000 UTC m=+146.806873038 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.637233 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-fs2p7" event={"ID":"8611553d-e8b4-4f6e-a2c0-abb19409cd02","Type":"ContainerStarted","Data":"b459ceffc3fd3c4773bc9e8afe64d0de6d216a036a8eab45c31080a8ab7cdb02"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.637879 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-fs2p7" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.656890 4748 patch_prober.go:28] interesting pod/downloads-7954f5f757-fs2p7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.657323 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fs2p7" podUID="8611553d-e8b4-4f6e-a2c0-abb19409cd02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 14:55:20 crc 
kubenswrapper[4748]: I0216 14:55:20.658081 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6hfsw" podStartSLOduration=6.658067786 podStartE2EDuration="6.658067786s" podCreationTimestamp="2026-02-16 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.65745175 +0000 UTC m=+146.349120789" watchObservedRunningTime="2026-02-16 14:55:20.658067786 +0000 UTC m=+146.349736825" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.658726 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" podStartSLOduration=124.658708291 podStartE2EDuration="2m4.658708291s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.625444694 +0000 UTC m=+146.317113733" watchObservedRunningTime="2026-02-16 14:55:20.658708291 +0000 UTC m=+146.350377330" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.685290 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k2pmm" event={"ID":"a70b40e2-8e35-4633-bc3c-2450a0df944c","Type":"ContainerStarted","Data":"aa1d27738ae6e7be3176e10880983acb39b4a643e25c49e05d3042f3336ea921"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.695031 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-fs2p7" podStartSLOduration=125.695017815 podStartE2EDuration="2m5.695017815s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.693468807 +0000 UTC m=+146.385137846" 
watchObservedRunningTime="2026-02-16 14:55:20.695017815 +0000 UTC m=+146.386686854" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.715898 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.717347 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.217333301 +0000 UTC m=+146.909002340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.743156 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-hqs5w" event={"ID":"a874c790-da94-4e03-a484-33c6f9126664","Type":"ContainerStarted","Data":"e3a6a411e66adb1d36f32e09a9270420bf4ea5235516fc163613daf0a2dd6746"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.805950 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" event={"ID":"d6ee3265-356a-4eb2-afd0-c72976719909","Type":"ContainerStarted","Data":"522bb87b56ec940a3597279e8961f5211b99fed725eb81c51ceee20ec52a2033"} Feb 16 14:55:20 
crc kubenswrapper[4748]: I0216 14:55:20.814882 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-hqs5w" podStartSLOduration=125.814867088 podStartE2EDuration="2m5.814867088s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.814350915 +0000 UTC m=+146.506019954" watchObservedRunningTime="2026-02-16 14:55:20.814867088 +0000 UTC m=+146.506536127" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.818171 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.819731 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.319702279 +0000 UTC m=+147.011371318 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.853618 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" event={"ID":"e168487f-8cf1-45d6-9f48-7e15e92f7c22","Type":"ContainerStarted","Data":"e1c16164993639a9718fbbf5d4bbd6bdd2301171e58db687f83cd32f8ea38103"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.908997 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" event={"ID":"82faed60-15dd-4f58-a71e-5a46a1348e2e","Type":"ContainerStarted","Data":"d7ae441e2ae699335b67f4e9f8978d9d2ba508a166b2382b4ed21c0588b205a0"} Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.930479 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:20 crc kubenswrapper[4748]: E0216 14:55:20.931512 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.431499431 +0000 UTC m=+147.123168460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.932326 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.932881 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.940369 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:55:20 crc kubenswrapper[4748]: I0216 14:55:20.980637 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" podStartSLOduration=124.980617284 podStartE2EDuration="2m4.980617284s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:20.90935678 +0000 UTC m=+146.601025819" watchObservedRunningTime="2026-02-16 14:55:20.980617284 +0000 UTC m=+146.672286313" Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.033461 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" 
(UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.035482 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.535469799 +0000 UTC m=+147.227138838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.139216 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.139546 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.639530209 +0000 UTC m=+147.331199248 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.145157 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-clm9d" Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.240998 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.241420 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.741409414 +0000 UTC m=+147.433078453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.343465 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.348293 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.848269694 +0000 UTC m=+147.539938733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.453381 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.453691 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:21.953679928 +0000 UTC m=+147.645348957 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.543651 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.543695 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.554418 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.554698 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.054684322 +0000 UTC m=+147.746353361 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.588575 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.594905 4748 patch_prober.go:28] interesting pod/router-default-5444994796-hqs5w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:21 crc kubenswrapper[4748]: [-]has-synced failed: reason withheld Feb 16 14:55:21 crc kubenswrapper[4748]: [+]process-running ok Feb 16 14:55:21 crc kubenswrapper[4748]: healthz check failed Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.594955 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hqs5w" podUID="a874c790-da94-4e03-a484-33c6f9126664" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.656848 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.657420 4748 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.157406538 +0000 UTC m=+147.849075577 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.760691 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.761052 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.261037028 +0000 UTC m=+147.952706067 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.861941 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.862499 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.362488053 +0000 UTC m=+148.054157092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.948986 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" event={"ID":"4090e7b0-fd01-4592-bf29-78649fde005e","Type":"ContainerStarted","Data":"91db1c81c4bb28bb519a323397cd191b5a8d28e63a9e4d69d068857782cd2ba3"} Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.949036 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" event={"ID":"4090e7b0-fd01-4592-bf29-78649fde005e","Type":"ContainerStarted","Data":"43a285ce91fde4165862f6a9dacaaf8d31634f14c2118240678b750c88347fc2"} Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.962032 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" event={"ID":"1ef14221-89ef-4843-af97-142575e3284f","Type":"ContainerStarted","Data":"317f06f26b96b005fbbafc448b78408cceac363876ba05fd04ecf0bbba4c3d8d"} Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.963628 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:21 crc kubenswrapper[4748]: E0216 14:55:21.963966 4748 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.463951988 +0000 UTC m=+148.155621027 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.971218 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-fvh45" event={"ID":"b1c29b40-2c74-45be-bdf6-2d77ccc9c6da","Type":"ContainerStarted","Data":"8fc291c534046ae7b845edd2fa4f2ed4f15d2a13f087e17a583c5546c41e7e3d"} Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.988813 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" event={"ID":"640e1720-ddc6-4148-8f90-a80375ca4187","Type":"ContainerStarted","Data":"b8d9666da79fc15c957598882581599cbfd4e533d5c0df1c2c9f17a81071bd08"} Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.988899 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.999822 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" 
event={"ID":"8cdab3c5-a3ee-4f2c-8dde-d0366c489344","Type":"ContainerStarted","Data":"af694219e3060d336d38fcb904ad7c1e2ed867f9f312e8a0ded2600bddc709d0"} Feb 16 14:55:21 crc kubenswrapper[4748]: I0216 14:55:21.999875 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" event={"ID":"8cdab3c5-a3ee-4f2c-8dde-d0366c489344","Type":"ContainerStarted","Data":"2e7fecc782d64597be7c1faed6ac5608d972a9b7426b4da1a702749845895b33"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.020052 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" event={"ID":"86e29be3-d9b1-46cb-be88-5cdfab20b770","Type":"ContainerStarted","Data":"1fa0f0f3e1527e868cb7bec86865c2b3ac459ac92879a3072bd59e19092aba80"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.020555 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.020852 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.022640 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-k5wnv" podStartSLOduration=127.022624898 podStartE2EDuration="2m7.022624898s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.020730561 +0000 UTC m=+147.712399600" watchObservedRunningTime="2026-02-16 14:55:22.022624898 +0000 UTC m=+147.714293927" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.022959 4748 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-gw9l8" podStartSLOduration=127.022954746 podStartE2EDuration="2m7.022954746s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:21.981939816 +0000 UTC m=+147.673608855" watchObservedRunningTime="2026-02-16 14:55:22.022954746 +0000 UTC m=+147.714623785" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.029810 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" event={"ID":"ab3f8bb5-da9d-4994-8822-9a1755622d96","Type":"ContainerStarted","Data":"438b58366a3eec2ffca50522cb56cf26298a89020a0e256da6e348252a20ab96"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.029851 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" event={"ID":"ab3f8bb5-da9d-4994-8822-9a1755622d96","Type":"ContainerStarted","Data":"b579b6cbddb9c95e0e6105858e5d249e4d8a02e6127a4c8df8c60066e3bf1d4a"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.031169 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.036926 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.046489 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" event={"ID":"82faed60-15dd-4f58-a71e-5a46a1348e2e","Type":"ContainerStarted","Data":"1d497f51c1a7e15cef6c06eaa524b20368d469297d6c17d7f20bde05d7bc169f"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 
14:55:22.065113 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-fvh45" podStartSLOduration=127.065096615 podStartE2EDuration="2m7.065096615s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.063636119 +0000 UTC m=+147.755305168" watchObservedRunningTime="2026-02-16 14:55:22.065096615 +0000 UTC m=+147.756765654" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.065467 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.068500 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.568469029 +0000 UTC m=+148.260138078 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.071044 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" event={"ID":"a6db781b-d9b3-4c44-a364-820b8dded174","Type":"ContainerStarted","Data":"f47b3217bc3ef49d6a690b99af50b6a8f4854c35ad0fd8a33ecea57aae09b6db"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.089120 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-dvsq6" podStartSLOduration=126.089106012 podStartE2EDuration="2m6.089106012s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.088482486 +0000 UTC m=+147.780151535" watchObservedRunningTime="2026-02-16 14:55:22.089106012 +0000 UTC m=+147.780775051" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.098536 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" event={"ID":"77fafd52-27d8-4936-87da-81bbf875738e","Type":"ContainerStarted","Data":"445f8ee3d246d472ad08416a1a60492a37798b1487d5d6a546a03bb07b5272d4"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.136091 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k2pmm" 
event={"ID":"a70b40e2-8e35-4633-bc3c-2450a0df944c","Type":"ContainerStarted","Data":"580da4daa2dc8805256602d28f9cf5fca28cce93054f18498261cf183fbcaf51"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.136702 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.153080 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-4m6k7" podStartSLOduration=126.153065824 podStartE2EDuration="2m6.153065824s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.114681218 +0000 UTC m=+147.806350257" watchObservedRunningTime="2026-02-16 14:55:22.153065824 +0000 UTC m=+147.844734863" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.153423 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx" podStartSLOduration=127.153419913 podStartE2EDuration="2m7.153419913s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.151121815 +0000 UTC m=+147.842790854" watchObservedRunningTime="2026-02-16 14:55:22.153419913 +0000 UTC m=+147.845088952" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.171249 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.172384 4748 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.672367734 +0000 UTC m=+148.364036773 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.173384 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rn626" event={"ID":"84a8d805-58ad-4365-a405-7ab625c4c1a1","Type":"ContainerStarted","Data":"d0f93b9ffe3f8471975d5a0503234a74a3876987079a21f3a7486717ce99dcc0"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.206687 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" event={"ID":"5613c863-4492-4b96-8045-c520e1a45ff1","Type":"ContainerStarted","Data":"58693d33aef2447b558a6c9fdc1e174fa99dff89606b45834f2fe9f2f33f4b8e"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.206743 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" event={"ID":"5613c863-4492-4b96-8045-c520e1a45ff1","Type":"ContainerStarted","Data":"743c1cafc735954298a0228e888f80114bebbb776188f4cab3ff9c93e67f0a35"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.228739 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-qg7wg" podStartSLOduration=127.228708566 podStartE2EDuration="2m7.228708566s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.227167948 +0000 UTC m=+147.918836987" watchObservedRunningTime="2026-02-16 14:55:22.228708566 +0000 UTC m=+147.920377605" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.239031 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" event={"ID":"62e84868-6940-4784-8892-ac255eacc315","Type":"ContainerStarted","Data":"7a6b527cb818dd6d47957531d22a7ce6c86b73d1f81c20439c824c353fc28d53"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.271269 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" event={"ID":"1eab95e3-e9ca-4793-9ccb-ea25223ec0a3","Type":"ContainerStarted","Data":"c7ffd41b56f1ce8842451ff924b8c97e8b53c2e829668088fc650f4cd7eb8a30"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.272464 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.276518 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.277573 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.777562252 +0000 UTC m=+148.469231291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.285639 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" podStartSLOduration=126.285621343 podStartE2EDuration="2m6.285621343s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.28348879 +0000 UTC m=+147.975157829" watchObservedRunningTime="2026-02-16 14:55:22.285621343 +0000 UTC m=+147.977290382" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.287918 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" event={"ID":"d4609715-d418-4f84-843a-b916f5e920ec","Type":"ContainerStarted","Data":"85d6648dd409023222d97b7ee859c9058b3359b0e4bfe60ae3363dea3fe6d032"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.319569 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" event={"ID":"d6ee3265-356a-4eb2-afd0-c72976719909","Type":"ContainerStarted","Data":"26bbe3db0ce106835040d08675ba6f7a88c53534e7ee70b67424f993d49b8e70"} Feb 16 
14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.346053 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-bf462" podStartSLOduration=127.346038837 podStartE2EDuration="2m7.346038837s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.319467455 +0000 UTC m=+148.011136504" watchObservedRunningTime="2026-02-16 14:55:22.346038837 +0000 UTC m=+148.037707876" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.346203 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-k2pmm" podStartSLOduration=8.346199461 podStartE2EDuration="8.346199461s" podCreationTimestamp="2026-02-16 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.345743459 +0000 UTC m=+148.037412508" watchObservedRunningTime="2026-02-16 14:55:22.346199461 +0000 UTC m=+148.037868500" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.355498 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" event={"ID":"0491318b-bf9a-46a2-b262-8aaf5f3061f9","Type":"ContainerStarted","Data":"e39a471e8dba9941972236890c8e55adc0d3a7415bb7237814aa1e538b51991b"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.381154 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.382266 4748 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:22.882248378 +0000 UTC m=+148.573917417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.392902 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" event={"ID":"34a812e6-7d17-4c40-a9ab-376ef6ab5001","Type":"ContainerStarted","Data":"29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.394261 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.400007 4748 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qm9b7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.400066 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: 
connect: connection refused" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.402041 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-2ldx8" podStartSLOduration=126.40201213 podStartE2EDuration="2m6.40201213s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.371229034 +0000 UTC m=+148.062898073" watchObservedRunningTime="2026-02-16 14:55:22.40201213 +0000 UTC m=+148.093681159" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.403526 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-z9v5c" podStartSLOduration=127.403518907 podStartE2EDuration="2m7.403518907s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.400581484 +0000 UTC m=+148.092250523" watchObservedRunningTime="2026-02-16 14:55:22.403518907 +0000 UTC m=+148.095187946" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.448993 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8vnkq" podStartSLOduration=127.448977549 podStartE2EDuration="2m7.448977549s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.447160164 +0000 UTC m=+148.138829203" watchObservedRunningTime="2026-02-16 14:55:22.448977549 +0000 UTC m=+148.140646588" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.452228 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" event={"ID":"8ae571ba-1425-40a1-93f7-498609d03860","Type":"ContainerStarted","Data":"6c61924895972a3c10f0e75d315115629341a6d1173a8853164c6b2ad2462d0e"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.452275 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" event={"ID":"8ae571ba-1425-40a1-93f7-498609d03860","Type":"ContainerStarted","Data":"f5e547835305abb589892d955a6715476bddeaef1dc319ff0f2429770807503d"} Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.487602 4748 patch_prober.go:28] interesting pod/downloads-7954f5f757-fs2p7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.487876 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fs2p7" podUID="8611553d-e8b4-4f6e-a2c0-abb19409cd02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.490360 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.492759 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 14:55:22.992744248 +0000 UTC m=+148.684413287 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.503035 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-6wx2n" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.515569 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" podStartSLOduration=126.515554096 podStartE2EDuration="2m6.515554096s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.514562241 +0000 UTC m=+148.206231290" watchObservedRunningTime="2026-02-16 14:55:22.515554096 +0000 UTC m=+148.207223135" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.557097 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-cmbq6" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.561013 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" podStartSLOduration=126.560988897 podStartE2EDuration="2m6.560988897s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-02-16 14:55:22.554459674 +0000 UTC m=+148.246128713" watchObservedRunningTime="2026-02-16 14:55:22.560988897 +0000 UTC m=+148.252657936" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.595285 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.596529 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.096501161 +0000 UTC m=+148.788170200 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.623941 4748 patch_prober.go:28] interesting pod/router-default-5444994796-hqs5w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:22 crc kubenswrapper[4748]: [-]has-synced failed: reason withheld Feb 16 14:55:22 crc kubenswrapper[4748]: [+]process-running ok Feb 16 14:55:22 crc kubenswrapper[4748]: healthz check failed Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.624008 4748 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hqs5w" podUID="a874c790-da94-4e03-a484-33c6f9126664" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.641469 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" podStartSLOduration=127.641447549 podStartE2EDuration="2m7.641447549s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.596448379 +0000 UTC m=+148.288117418" watchObservedRunningTime="2026-02-16 14:55:22.641447549 +0000 UTC m=+148.333116588" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.699477 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.699791 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.199776981 +0000 UTC m=+148.891446020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.699873 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-glrmv" podStartSLOduration=126.699784911 podStartE2EDuration="2m6.699784911s" podCreationTimestamp="2026-02-16 14:53:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.696397847 +0000 UTC m=+148.388066886" watchObservedRunningTime="2026-02-16 14:55:22.699784911 +0000 UTC m=+148.391453950" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.700439 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-9x6qs" podStartSLOduration=127.700433027 podStartE2EDuration="2m7.700433027s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.643138591 +0000 UTC m=+148.334807620" watchObservedRunningTime="2026-02-16 14:55:22.700433027 +0000 UTC m=+148.392102066" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.776377 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-gbrhq" podStartSLOduration=127.776358387 podStartE2EDuration="2m7.776358387s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:22.741039118 +0000 UTC m=+148.432708147" watchObservedRunningTime="2026-02-16 14:55:22.776358387 +0000 UTC m=+148.468027416" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.802222 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.802793 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.802952 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.803080 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.803420 
4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.30340206 +0000 UTC m=+148.995071099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.807942 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.842751 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.856485 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") 
" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.904185 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.904286 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:22 crc kubenswrapper[4748]: E0216 14:55:22.904595 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.404573038 +0000 UTC m=+149.096242077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.925229 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:22 crc kubenswrapper[4748]: I0216 14:55:22.949885 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rksmj" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.010613 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.011058 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.511042188 +0000 UTC m=+149.202711227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.039226 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.053952 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.071803 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.113175 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.113519 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.613507858 +0000 UTC m=+149.305176887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.217134 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.217449 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.717434585 +0000 UTC m=+149.409103624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.318661 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.319081 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.819064464 +0000 UTC m=+149.510733503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.422111 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.422816 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:23.922801636 +0000 UTC m=+149.614470675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.526485 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.526556 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-k2pmm" event={"ID":"a70b40e2-8e35-4633-bc3c-2450a0df944c","Type":"ContainerStarted","Data":"cc6a882bf725ad2215f8aab767f7b6b8fe9546cdf79cf82cd02fd7a9d097228d"} Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.526918 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.026903427 +0000 UTC m=+149.718572466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.558869 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2hlms"] Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.559798 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.564166 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.588157 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rn626" event={"ID":"84a8d805-58ad-4365-a405-7ab625c4c1a1","Type":"ContainerStarted","Data":"7c9e8bcce63dde62c936d8cba06d6563f8b824ccff81916bf5bfcf1a3d918e50"} Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.588254 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hlms"] Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.592419 4748 patch_prober.go:28] interesting pod/downloads-7954f5f757-fs2p7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.592474 4748 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-fs2p7" podUID="8611553d-e8b4-4f6e-a2c0-abb19409cd02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.592789 4748 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-qm9b7 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.592835 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.628685 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.628949 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-utilities\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.628979 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-catalog-content\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.629024 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4sq2\" (UniqueName: \"kubernetes.io/projected/3807d8da-105e-4dbb-8446-81b6d4a2ae05-kube-api-access-z4sq2\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.629159 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.129142462 +0000 UTC m=+149.820811501 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.730782 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-utilities\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.732176 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-catalog-content\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.732112 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-utilities\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.732470 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-catalog-content\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.732753 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.732902 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.232889864 +0000 UTC m=+149.924558903 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.733305 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4sq2\" (UniqueName: \"kubernetes.io/projected/3807d8da-105e-4dbb-8446-81b6d4a2ae05-kube-api-access-z4sq2\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.754188 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s9vdw"]
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.755380 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.757726 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.773230 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9vdw"]
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.785473 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4sq2\" (UniqueName: \"kubernetes.io/projected/3807d8da-105e-4dbb-8446-81b6d4a2ae05-kube-api-access-z4sq2\") pod \"community-operators-2hlms\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.823908 4748 patch_prober.go:28] interesting pod/router-default-5444994796-hqs5w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:23 crc kubenswrapper[4748]: [-]has-synced failed: reason withheld
Feb 16 14:55:23 crc kubenswrapper[4748]: [+]process-running ok
Feb 16 14:55:23 crc kubenswrapper[4748]: healthz check failed
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.823958 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hqs5w" podUID="a874c790-da94-4e03-a484-33c6f9126664" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.835656 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.836648 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc5js\" (UniqueName: \"kubernetes.io/projected/712cdaea-3348-4e68-8761-842d520bedc6-kube-api-access-rc5js\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.836796 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-catalog-content\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.836926 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-utilities\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.838138 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.338111433 +0000 UTC m=+150.029780472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.945890 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.946254 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc5js\" (UniqueName: \"kubernetes.io/projected/712cdaea-3348-4e68-8761-842d520bedc6-kube-api-access-rc5js\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.946428 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-catalog-content\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.946612 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-utilities\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: E0216 14:55:23.948161 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.448138552 +0000 UTC m=+150.139807591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.948636 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-catalog-content\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.949367 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-utilities\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.984631 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-pfvsx"
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.985739 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-w2jmq"]
Feb 16 14:55:23 crc kubenswrapper[4748]: I0216 14:55:23.986967 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.000300 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2jmq"]
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.023858 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc5js\" (UniqueName: \"kubernetes.io/projected/712cdaea-3348-4e68-8761-842d520bedc6-kube-api-access-rc5js\") pod \"certified-operators-s9vdw\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.044562 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.048034 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.048377 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-utilities\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.048470 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bc2\" (UniqueName: \"kubernetes.io/projected/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-kube-api-access-k7bc2\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.048520 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-catalog-content\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.048657 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.548637873 +0000 UTC m=+150.240306912 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.066267 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.152241 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7bc2\" (UniqueName: \"kubernetes.io/projected/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-kube-api-access-k7bc2\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.152305 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-catalog-content\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.152328 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.152363 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-utilities\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.153920 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-catalog-content\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.154162 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.654151349 +0000 UTC m=+150.345820388 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.154596 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-utilities\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.164146 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-crk29"]
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.165360 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.185914 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crk29"]
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.217512 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7bc2\" (UniqueName: \"kubernetes.io/projected/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-kube-api-access-k7bc2\") pod \"community-operators-w2jmq\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") " pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.255170 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.255422 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-utilities\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.255458 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-catalog-content\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.255506 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkpk\" (UniqueName: \"kubernetes.io/projected/9ca47343-f274-4f11-84ed-6fc055cf41f9-kube-api-access-4wkpk\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.255625 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.755610334 +0000 UTC m=+150.447279373 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.373705 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.374257 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkpk\" (UniqueName: \"kubernetes.io/projected/9ca47343-f274-4f11-84ed-6fc055cf41f9-kube-api-access-4wkpk\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.374368 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-utilities\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.374421 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-catalog-content\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.375050 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.875036347 +0000 UTC m=+150.566705386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.375057 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-catalog-content\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.375295 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-utilities\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.411951 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkpk\" (UniqueName: \"kubernetes.io/projected/9ca47343-f274-4f11-84ed-6fc055cf41f9-kube-api-access-4wkpk\") pod \"certified-operators-crk29\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.435460 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.477352 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.477543 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.977515617 +0000 UTC m=+150.669184656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.477758 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.478068 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:24.978056931 +0000 UTC m=+150.669725970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.521089 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.580902 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.581247 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:25.081233659 +0000 UTC m=+150.772902698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.608351 4748 patch_prober.go:28] interesting pod/router-default-5444994796-hqs5w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:24 crc kubenswrapper[4748]: [-]has-synced failed: reason withheld
Feb 16 14:55:24 crc kubenswrapper[4748]: [+]process-running ok
Feb 16 14:55:24 crc kubenswrapper[4748]: healthz check failed
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.608399 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hqs5w" podUID="a874c790-da94-4e03-a484-33c6f9126664" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.626006 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"94133e34d1fe1020232987ebfd22078287c06a0bc11f48dee097bf8006a492d8"}
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.639989 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"111cea7e4b69e1c29a76e0f819402284864f6135773b102770e69c6a23216186"}
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.640027 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"06f4f58bd2032cb64dd998186485708253a6116ce141ca8f6f8ac69972e95f79"}
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.667837 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4a7cc91a5fb99b169407c358143910599ae98b7e817e0318f072a18b72a31f45"}
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.684292 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.685395 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:25.185383731 +0000 UTC m=+150.877052770 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.695553 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rn626" event={"ID":"84a8d805-58ad-4365-a405-7ab625c4c1a1","Type":"ContainerStarted","Data":"56651b704e3f6b25ec066610c85e0825d8deca6e329b212481844f9a46e773bd"}
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.714640 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.730294 4748 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.785416 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.785514 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:25.285499513 +0000 UTC m=+150.977168552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.785750 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.788212 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:25.28820145 +0000 UTC m=+150.979870489 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.851506 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2hlms"]
Feb 16 14:55:24 crc kubenswrapper[4748]: W0216 14:55:24.864786 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3807d8da_105e_4dbb_8446_81b6d4a2ae05.slice/crio-c7ec1f6149fbd95886f298acec833da143ab202ffb558bd634030fb120e921a9 WatchSource:0}: Error finding container c7ec1f6149fbd95886f298acec833da143ab202ffb558bd634030fb120e921a9: Status 404 returned error can't find the container with id c7ec1f6149fbd95886f298acec833da143ab202ffb558bd634030fb120e921a9
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.887248 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:24 crc kubenswrapper[4748]: E0216 14:55:24.888193 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:25.388176108 +0000 UTC m=+151.079845147 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 14:55:24 crc kubenswrapper[4748]: I0216 14:55:24.953029 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s9vdw"]
Feb 16 14:55:24 crc kubenswrapper[4748]: W0216 14:55:24.957833 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod712cdaea_3348_4e68_8761_842d520bedc6.slice/crio-52348d50cb3eb3c29f641d8999e75d21021525b7a47cd13cbce22d439ac34d0f WatchSource:0}: Error finding container 52348d50cb3eb3c29f641d8999e75d21021525b7a47cd13cbce22d439ac34d0f: Status 404 returned error can't find the container with id 52348d50cb3eb3c29f641d8999e75d21021525b7a47cd13cbce22d439ac34d0f
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.007741 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:25 crc kubenswrapper[4748]: E0216 14:55:25.008291 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-16 14:55:25.508276898 +0000 UTC m=+151.199945927 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-26scw" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.053279 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-w2jmq"] Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.110241 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 16 14:55:25 crc kubenswrapper[4748]: E0216 14:55:25.110524 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-16 14:55:25.610507652 +0000 UTC m=+151.302176691 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.174567 4748 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T14:55:24.730321709Z","Handler":null,"Name":""} Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.180099 4748 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.180134 4748 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.211659 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.214567 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.214688 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.236380 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-26scw\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") " pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.312269 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-crk29"]
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.312727 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.324201 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 14:55:25 crc kubenswrapper[4748]: W0216 14:55:25.336584 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ca47343_f274_4f11_84ed_6fc055cf41f9.slice/crio-46c19eb6dd1084940d74639505e2b39dc763bd35136f22e528bb190dcee932e8 WatchSource:0}: Error finding container 46c19eb6dd1084940d74639505e2b39dc763bd35136f22e528bb190dcee932e8: Status 404 returned error can't find the container with id 46c19eb6dd1084940d74639505e2b39dc763bd35136f22e528bb190dcee932e8
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.353204 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.566159 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-26scw"]
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.590376 4748 patch_prober.go:28] interesting pod/router-default-5444994796-hqs5w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:25 crc kubenswrapper[4748]: [-]has-synced failed: reason withheld
Feb 16 14:55:25 crc kubenswrapper[4748]: [+]process-running ok
Feb 16 14:55:25 crc kubenswrapper[4748]: healthz check failed
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.590659 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hqs5w" podUID="a874c790-da94-4e03-a484-33c6f9126664" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.701584 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" event={"ID":"a480239a-6f26-4189-9a3c-17896449a6e3","Type":"ContainerStarted","Data":"ca0d7f83c8b659fab9fc0c4194ed2a2cb4bc6ec2ac1b6a948c43689bd757843d"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.703796 4748 generic.go:334] "Generic (PLEG): container finished" podID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerID="4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6" exitCode=0
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.703900 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2jmq" event={"ID":"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a","Type":"ContainerDied","Data":"4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.703942 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2jmq" event={"ID":"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a","Type":"ContainerStarted","Data":"e4fc8fab1b3347724fb94ea27452db0ccaf793c99727cad3141faf458ee6e7cd"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.707355 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.714436 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rn626" event={"ID":"84a8d805-58ad-4365-a405-7ab625c4c1a1","Type":"ContainerStarted","Data":"c7a6a04dfc533b242406a20614c4d58a9fb7608b2760dfd5c0b8f043dd0cbff1"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.717584 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"d0a3a45597993d07bad7cc568dd0498c19fad35ff52ebd0f21ba79cfff534d4e"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.719409 4748 generic.go:334] "Generic (PLEG): container finished" podID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerID="00a19a50a1d00cf047677bb93896ab73e7c32e69675cb6d1bc896c572216137a" exitCode=0
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.719592 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk29" event={"ID":"9ca47343-f274-4f11-84ed-6fc055cf41f9","Type":"ContainerDied","Data":"00a19a50a1d00cf047677bb93896ab73e7c32e69675cb6d1bc896c572216137a"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.719766 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk29" event={"ID":"9ca47343-f274-4f11-84ed-6fc055cf41f9","Type":"ContainerStarted","Data":"46c19eb6dd1084940d74639505e2b39dc763bd35136f22e528bb190dcee932e8"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.725792 4748 generic.go:334] "Generic (PLEG): container finished" podID="712cdaea-3348-4e68-8761-842d520bedc6" containerID="be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837" exitCode=0
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.725893 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9vdw" event={"ID":"712cdaea-3348-4e68-8761-842d520bedc6","Type":"ContainerDied","Data":"be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.725927 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9vdw" event={"ID":"712cdaea-3348-4e68-8761-842d520bedc6","Type":"ContainerStarted","Data":"52348d50cb3eb3c29f641d8999e75d21021525b7a47cd13cbce22d439ac34d0f"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.735074 4748 generic.go:334] "Generic (PLEG): container finished" podID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerID="5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d" exitCode=0
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.736068 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hlms" event={"ID":"3807d8da-105e-4dbb-8446-81b6d4a2ae05","Type":"ContainerDied","Data":"5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.736689 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hlms" event={"ID":"3807d8da-105e-4dbb-8446-81b6d4a2ae05","Type":"ContainerStarted","Data":"c7ec1f6149fbd95886f298acec833da143ab202ffb558bd634030fb120e921a9"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.743791 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"1af07d18b2d4500e50af54c2375befaa8a54db761935da20e18a7270ad35ef37"}
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.743831 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.752494 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p2jwt"]
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.759520 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.767463 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2jwt"]
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.775544 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.777439 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rn626" podStartSLOduration=11.77742265 podStartE2EDuration="11.77742265s" podCreationTimestamp="2026-02-16 14:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:25.775636815 +0000 UTC m=+151.467305854" watchObservedRunningTime="2026-02-16 14:55:25.77742265 +0000 UTC m=+151.469091689"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.821386 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-catalog-content\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.821501 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-utilities\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.821567 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7xt7\" (UniqueName: \"kubernetes.io/projected/a62cdd4f-90c2-45c7-893d-35356124bf3c-kube-api-access-n7xt7\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.922870 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-catalog-content\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.922931 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-utilities\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.922972 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7xt7\" (UniqueName: \"kubernetes.io/projected/a62cdd4f-90c2-45c7-893d-35356124bf3c-kube-api-access-n7xt7\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.923693 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-catalog-content\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.923956 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-utilities\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:25 crc kubenswrapper[4748]: I0216 14:55:25.950481 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7xt7\" (UniqueName: \"kubernetes.io/projected/a62cdd4f-90c2-45c7-893d-35356124bf3c-kube-api-access-n7xt7\") pod \"redhat-marketplace-p2jwt\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.086344 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2jwt"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.150586 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdw4"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.151631 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.173119 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdw4"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.227482 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr797\" (UniqueName: \"kubernetes.io/projected/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-kube-api-access-hr797\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.227636 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-utilities\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.227668 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-catalog-content\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.328822 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-utilities\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.328876 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-catalog-content\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.328914 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hr797\" (UniqueName: \"kubernetes.io/projected/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-kube-api-access-hr797\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.330466 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-catalog-content\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.330560 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-utilities\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.361743 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr797\" (UniqueName: \"kubernetes.io/projected/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-kube-api-access-hr797\") pod \"redhat-marketplace-9jdw4\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.364073 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.364693 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.369142 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.370316 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.370664 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.430603 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.430655 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.478401 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdw4"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.531929 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.532336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.532062 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.561096 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.590694 4748 patch_prober.go:28] interesting pod/router-default-5444994796-hqs5w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 14:55:26 crc kubenswrapper[4748]: [-]has-synced failed: reason withheld
Feb 16 14:55:26 crc kubenswrapper[4748]: [+]process-running ok
Feb 16 14:55:26 crc kubenswrapper[4748]: healthz check failed
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.590756 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hqs5w" podUID="a874c790-da94-4e03-a484-33c6f9126664" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.657689 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-fvh45"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.657740 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-fvh45"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.666095 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2jwt"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.671589 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-fvh45"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.734693 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.741339 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdw4"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.750274 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nmd5x"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.751423 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.759283 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.778213 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2jwt" event={"ID":"a62cdd4f-90c2-45c7-893d-35356124bf3c","Type":"ContainerStarted","Data":"fe757ed1c421fef133e961f7dba6b9733d8132ba76bed2cfb03eddc59b5a88da"}
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.805304 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" event={"ID":"a480239a-6f26-4189-9a3c-17896449a6e3","Type":"ContainerStarted","Data":"216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1"}
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.813979 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmd5x"]
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.824159 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-fvh45"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.830372 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" podStartSLOduration=131.830347486 podStartE2EDuration="2m11.830347486s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:26.828652944 +0000 UTC m=+152.520321983" watchObservedRunningTime="2026-02-16 14:55:26.830347486 +0000 UTC m=+152.522016525"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.839691 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-utilities\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.840135 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrg62\" (UniqueName: \"kubernetes.io/projected/883993d1-6837-4aef-952c-720e2901efb5-kube-api-access-nrg62\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.840184 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-catalog-content\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.941603 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrg62\" (UniqueName: \"kubernetes.io/projected/883993d1-6837-4aef-952c-720e2901efb5-kube-api-access-nrg62\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.942188 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-catalog-content\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.942258 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-utilities\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.944702 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-catalog-content\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.950152 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-utilities\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:26 crc kubenswrapper[4748]: I0216 14:55:26.992013 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrg62\" (UniqueName: \"kubernetes.io/projected/883993d1-6837-4aef-952c-720e2901efb5-kube-api-access-nrg62\") pod \"redhat-operators-nmd5x\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.020971 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.092030 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmd5x" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.154858 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rpxjm"] Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.156402 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.211156 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rpxjm"] Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.247853 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-utilities\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.247905 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkr6h\" (UniqueName: \"kubernetes.io/projected/690e9611-61fc-40f8-b3a6-706d8b6218f0-kube-api-access-zkr6h\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.248012 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-catalog-content\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.265190 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.265228 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.280465 4748 patch_prober.go:28] interesting pod/console-f9d7485db-4xpkc container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.280555 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4xpkc" podUID="1b6ee71e-062d-49b9-b693-665355764e4f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.343311 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.348894 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-catalog-content\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.348963 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-utilities\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.348989 4748 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-zkr6h\" (UniqueName: \"kubernetes.io/projected/690e9611-61fc-40f8-b3a6-706d8b6218f0-kube-api-access-zkr6h\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.352001 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-utilities\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.352746 4748 patch_prober.go:28] interesting pod/downloads-7954f5f757-fs2p7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.352795 4748 patch_prober.go:28] interesting pod/downloads-7954f5f757-fs2p7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" start-of-body= Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.353024 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-fs2p7" podUID="8611553d-e8b4-4f6e-a2c0-abb19409cd02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 10.217.0.16:8080: connect: connection refused" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.352899 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-fs2p7" podUID="8611553d-e8b4-4f6e-a2c0-abb19409cd02" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.16:8080/\": dial tcp 
10.217.0.16:8080: connect: connection refused" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.353433 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-catalog-content\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.376093 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkr6h\" (UniqueName: \"kubernetes.io/projected/690e9611-61fc-40f8-b3a6-706d8b6218f0-kube-api-access-zkr6h\") pod \"redhat-operators-rpxjm\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.558342 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.589132 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.594542 4748 patch_prober.go:28] interesting pod/router-default-5444994796-hqs5w container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 14:55:27 crc kubenswrapper[4748]: [-]has-synced failed: reason withheld Feb 16 14:55:27 crc kubenswrapper[4748]: [+]process-running ok Feb 16 14:55:27 crc kubenswrapper[4748]: healthz check failed Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.594589 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-hqs5w" podUID="a874c790-da94-4e03-a484-33c6f9126664" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.632991 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nmd5x"] Feb 16 14:55:27 crc kubenswrapper[4748]: W0216 14:55:27.679296 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod883993d1_6837_4aef_952c_720e2901efb5.slice/crio-934a6933cf9ec534faf6de321bfcceca4676402d9bd245616cead4c8e65efce6 WatchSource:0}: Error finding container 934a6933cf9ec534faf6de321bfcceca4676402d9bd245616cead4c8e65efce6: Status 404 returned error can't find the container with id 934a6933cf9ec534faf6de321bfcceca4676402d9bd245616cead4c8e65efce6 Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.828219 4748 generic.go:334] "Generic (PLEG): container finished" podID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerID="0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302" exitCode=0 Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.828306 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdw4" event={"ID":"3c38687a-4ac8-4a60-8e5b-d0a260d6773d","Type":"ContainerDied","Data":"0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302"} Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.828356 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdw4" event={"ID":"3c38687a-4ac8-4a60-8e5b-d0a260d6773d","Type":"ContainerStarted","Data":"a55f095ed3c601fd516022140aa8d813fac654a3f3215a9f3ebf6273ee2b7371"} Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.833112 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmd5x" event={"ID":"883993d1-6837-4aef-952c-720e2901efb5","Type":"ContainerStarted","Data":"934a6933cf9ec534faf6de321bfcceca4676402d9bd245616cead4c8e65efce6"} Feb 16 14:55:27 crc 
kubenswrapper[4748]: I0216 14:55:27.852782 4748 generic.go:334] "Generic (PLEG): container finished" podID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerID="741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f" exitCode=0 Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.853148 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2jwt" event={"ID":"a62cdd4f-90c2-45c7-893d-35356124bf3c","Type":"ContainerDied","Data":"741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f"} Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.866492 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f81b4e90-9e8a-46f0-80a0-eb1190b19425","Type":"ContainerStarted","Data":"46d2d7773147dd9371856c6b111dc524dcf0a840b6e1f4e727ab21b7beac8d47"} Feb 16 14:55:27 crc kubenswrapper[4748]: I0216 14:55:27.866894 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.032107 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.032789 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.040741 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.044217 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.072088 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.101334 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rpxjm"] Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.167371 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b87b718b-ba60-4684-88ef-fa519fa2e402-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.168157 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87b718b-ba60-4684-88ef-fa519fa2e402-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.269801 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b87b718b-ba60-4684-88ef-fa519fa2e402-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 
14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.269880 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87b718b-ba60-4684-88ef-fa519fa2e402-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.270048 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b87b718b-ba60-4684-88ef-fa519fa2e402-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.309336 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87b718b-ba60-4684-88ef-fa519fa2e402-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.386658 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.597203 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.607915 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-hqs5w" Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.902039 4748 generic.go:334] "Generic (PLEG): container finished" podID="883993d1-6837-4aef-952c-720e2901efb5" containerID="2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f" exitCode=0 Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.902137 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmd5x" event={"ID":"883993d1-6837-4aef-952c-720e2901efb5","Type":"ContainerDied","Data":"2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f"} Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.927172 4748 generic.go:334] "Generic (PLEG): container finished" podID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerID="227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad" exitCode=0 Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.927241 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpxjm" event={"ID":"690e9611-61fc-40f8-b3a6-706d8b6218f0","Type":"ContainerDied","Data":"227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad"} Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.927268 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpxjm" event={"ID":"690e9611-61fc-40f8-b3a6-706d8b6218f0","Type":"ContainerStarted","Data":"995891c9dc7cceff51608e03c10204432efb14291fe79cb09ac331aee78ac68e"} Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.937426 
4748 generic.go:334] "Generic (PLEG): container finished" podID="f81b4e90-9e8a-46f0-80a0-eb1190b19425" containerID="b751c986b3f760f5fea18af10829075acf5fd88bd995a2d29648a90278d91c4e" exitCode=0 Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.937594 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f81b4e90-9e8a-46f0-80a0-eb1190b19425","Type":"ContainerDied","Data":"b751c986b3f760f5fea18af10829075acf5fd88bd995a2d29648a90278d91c4e"} Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.942401 4748 generic.go:334] "Generic (PLEG): container finished" podID="d4609715-d418-4f84-843a-b916f5e920ec" containerID="85d6648dd409023222d97b7ee859c9058b3359b0e4bfe60ae3363dea3fe6d032" exitCode=0 Feb 16 14:55:28 crc kubenswrapper[4748]: I0216 14:55:28.942477 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" event={"ID":"d4609715-d418-4f84-843a-b916f5e920ec","Type":"ContainerDied","Data":"85d6648dd409023222d97b7ee859c9058b3359b0e4bfe60ae3363dea3fe6d032"} Feb 16 14:55:29 crc kubenswrapper[4748]: I0216 14:55:29.055661 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 16 14:55:29 crc kubenswrapper[4748]: I0216 14:55:29.985727 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b87b718b-ba60-4684-88ef-fa519fa2e402","Type":"ContainerStarted","Data":"ae2be8146a363e0805bc29cf2953add9c4f559e17fcfbd17b01d93bd6671dbd0"} Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.428735 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.508351 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.624200 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4609715-d418-4f84-843a-b916f5e920ec-secret-volume\") pod \"d4609715-d418-4f84-843a-b916f5e920ec\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.624270 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzdgp\" (UniqueName: \"kubernetes.io/projected/d4609715-d418-4f84-843a-b916f5e920ec-kube-api-access-lzdgp\") pod \"d4609715-d418-4f84-843a-b916f5e920ec\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.624334 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4609715-d418-4f84-843a-b916f5e920ec-config-volume\") pod \"d4609715-d418-4f84-843a-b916f5e920ec\" (UID: \"d4609715-d418-4f84-843a-b916f5e920ec\") " Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.624373 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kubelet-dir\") pod \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.624399 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kube-api-access\") pod \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\" (UID: \"f81b4e90-9e8a-46f0-80a0-eb1190b19425\") " Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.625399 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f81b4e90-9e8a-46f0-80a0-eb1190b19425" (UID: "f81b4e90-9e8a-46f0-80a0-eb1190b19425"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.626554 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4609715-d418-4f84-843a-b916f5e920ec-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4609715-d418-4f84-843a-b916f5e920ec" (UID: "d4609715-d418-4f84-843a-b916f5e920ec"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.630818 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4609715-d418-4f84-843a-b916f5e920ec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4609715-d418-4f84-843a-b916f5e920ec" (UID: "d4609715-d418-4f84-843a-b916f5e920ec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.632479 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f81b4e90-9e8a-46f0-80a0-eb1190b19425" (UID: "f81b4e90-9e8a-46f0-80a0-eb1190b19425"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.632888 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4609715-d418-4f84-843a-b916f5e920ec-kube-api-access-lzdgp" (OuterVolumeSpecName: "kube-api-access-lzdgp") pod "d4609715-d418-4f84-843a-b916f5e920ec" (UID: "d4609715-d418-4f84-843a-b916f5e920ec"). 
InnerVolumeSpecName "kube-api-access-lzdgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.729381 4748 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.729423 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f81b4e90-9e8a-46f0-80a0-eb1190b19425-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.729581 4748 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4609715-d418-4f84-843a-b916f5e920ec-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.729599 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzdgp\" (UniqueName: \"kubernetes.io/projected/d4609715-d418-4f84-843a-b916f5e920ec-kube-api-access-lzdgp\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:30 crc kubenswrapper[4748]: I0216 14:55:30.729613 4748 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4609715-d418-4f84-843a-b916f5e920ec-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.021227 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" event={"ID":"d4609715-d418-4f84-843a-b916f5e920ec","Type":"ContainerDied","Data":"c0c9a6635678a4c0ea8cd3b6cfc48bd62f2ab4a95776a2adf41ce316de9b3f7f"} Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.021248 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m" Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.021276 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0c9a6635678a4c0ea8cd3b6cfc48bd62f2ab4a95776a2adf41ce316de9b3f7f" Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.023326 4748 generic.go:334] "Generic (PLEG): container finished" podID="b87b718b-ba60-4684-88ef-fa519fa2e402" containerID="ef9dd1de177309dd814a3e2d08ba5c630033a4678801ad97c61b49478e51dda1" exitCode=0 Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.023395 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b87b718b-ba60-4684-88ef-fa519fa2e402","Type":"ContainerDied","Data":"ef9dd1de177309dd814a3e2d08ba5c630033a4678801ad97c61b49478e51dda1"} Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.025984 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f81b4e90-9e8a-46f0-80a0-eb1190b19425","Type":"ContainerDied","Data":"46d2d7773147dd9371856c6b111dc524dcf0a840b6e1f4e727ab21b7beac8d47"} Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.026004 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d2d7773147dd9371856c6b111dc524dcf0a840b6e1f4e727ab21b7beac8d47" Feb 16 14:55:31 crc kubenswrapper[4748]: I0216 14:55:31.026071 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.528959 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.564886 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b87b718b-ba60-4684-88ef-fa519fa2e402-kubelet-dir\") pod \"b87b718b-ba60-4684-88ef-fa519fa2e402\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.565002 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b87b718b-ba60-4684-88ef-fa519fa2e402-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b87b718b-ba60-4684-88ef-fa519fa2e402" (UID: "b87b718b-ba60-4684-88ef-fa519fa2e402"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.565202 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87b718b-ba60-4684-88ef-fa519fa2e402-kube-api-access\") pod \"b87b718b-ba60-4684-88ef-fa519fa2e402\" (UID: \"b87b718b-ba60-4684-88ef-fa519fa2e402\") " Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.565631 4748 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b87b718b-ba60-4684-88ef-fa519fa2e402-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.575158 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b87b718b-ba60-4684-88ef-fa519fa2e402-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b87b718b-ba60-4684-88ef-fa519fa2e402" (UID: "b87b718b-ba60-4684-88ef-fa519fa2e402"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.667960 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b87b718b-ba60-4684-88ef-fa519fa2e402-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:32 crc kubenswrapper[4748]: I0216 14:55:32.981689 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-k2pmm" Feb 16 14:55:33 crc kubenswrapper[4748]: I0216 14:55:33.082141 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"b87b718b-ba60-4684-88ef-fa519fa2e402","Type":"ContainerDied","Data":"ae2be8146a363e0805bc29cf2953add9c4f559e17fcfbd17b01d93bd6671dbd0"} Feb 16 14:55:33 crc kubenswrapper[4748]: I0216 14:55:33.082192 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae2be8146a363e0805bc29cf2953add9c4f559e17fcfbd17b01d93bd6671dbd0" Feb 16 14:55:33 crc kubenswrapper[4748]: I0216 14:55:33.082204 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 16 14:55:34 crc kubenswrapper[4748]: I0216 14:55:34.729342 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:55:34 crc kubenswrapper[4748]: I0216 14:55:34.729902 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:55:37 crc kubenswrapper[4748]: I0216 14:55:37.278196 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:37 crc kubenswrapper[4748]: I0216 14:55:37.283255 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 14:55:37 crc kubenswrapper[4748]: I0216 14:55:37.361159 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-fs2p7" Feb 16 14:55:37 crc kubenswrapper[4748]: I0216 14:55:37.783545 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:55:37 crc kubenswrapper[4748]: I0216 14:55:37.793140 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/078f98ca-d871-47a5-96c3-1e818312c4c4-metrics-certs\") pod \"network-metrics-daemon-lll47\" (UID: \"078f98ca-d871-47a5-96c3-1e818312c4c4\") " pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:55:38 crc kubenswrapper[4748]: I0216 14:55:38.022638 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lll47" Feb 16 14:55:39 crc kubenswrapper[4748]: I0216 14:55:39.136403 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnhj8"] Feb 16 14:55:39 crc kubenswrapper[4748]: I0216 14:55:39.137198 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" podUID="ad452651-e143-4074-8f39-d3074bc487ca" containerName="controller-manager" containerID="cri-o://fe4bde89ec3ec168200337a3f3739acd0fc094b3bf0c69bb4eab771be5eae8fc" gracePeriod=30 Feb 16 14:55:39 crc kubenswrapper[4748]: I0216 14:55:39.144420 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh"] Feb 16 14:55:39 crc kubenswrapper[4748]: I0216 14:55:39.144688 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" podUID="1005279f-a20b-43cd-957a-731252114f31" containerName="route-controller-manager" containerID="cri-o://6aa69f22407f191a4989278999547ba255c93a2d259bc462b8f9d407033efe60" gracePeriod=30 Feb 16 14:55:40 crc kubenswrapper[4748]: I0216 14:55:40.181053 4748 generic.go:334] "Generic (PLEG): container finished" podID="ad452651-e143-4074-8f39-d3074bc487ca" containerID="fe4bde89ec3ec168200337a3f3739acd0fc094b3bf0c69bb4eab771be5eae8fc" exitCode=0 Feb 16 14:55:40 crc kubenswrapper[4748]: I0216 14:55:40.181166 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" event={"ID":"ad452651-e143-4074-8f39-d3074bc487ca","Type":"ContainerDied","Data":"fe4bde89ec3ec168200337a3f3739acd0fc094b3bf0c69bb4eab771be5eae8fc"} Feb 16 14:55:40 crc kubenswrapper[4748]: I0216 14:55:40.184652 4748 generic.go:334] "Generic (PLEG): container finished" podID="1005279f-a20b-43cd-957a-731252114f31" containerID="6aa69f22407f191a4989278999547ba255c93a2d259bc462b8f9d407033efe60" exitCode=0 Feb 16 14:55:40 crc kubenswrapper[4748]: I0216 14:55:40.184728 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" event={"ID":"1005279f-a20b-43cd-957a-731252114f31","Type":"ContainerDied","Data":"6aa69f22407f191a4989278999547ba255c93a2d259bc462b8f9d407033efe60"} Feb 16 14:55:45 crc kubenswrapper[4748]: I0216 14:55:45.359823 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" Feb 16 14:55:46 crc kubenswrapper[4748]: I0216 14:55:46.720505 4748 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hnhj8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 14:55:46 crc kubenswrapper[4748]: I0216 14:55:46.721227 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" podUID="ad452651-e143-4074-8f39-d3074bc487ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 14:55:48 crc kubenswrapper[4748]: I0216 14:55:48.335101 4748 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-dl2zh container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 14:55:48 crc kubenswrapper[4748]: I0216 14:55:48.335815 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" podUID="1005279f-a20b-43cd-957a-731252114f31" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.18:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.789809 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827177 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw"] Feb 16 14:55:49 crc kubenswrapper[4748]: E0216 14:55:49.827558 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1005279f-a20b-43cd-957a-731252114f31" containerName="route-controller-manager" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827578 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="1005279f-a20b-43cd-957a-731252114f31" containerName="route-controller-manager" Feb 16 14:55:49 crc kubenswrapper[4748]: E0216 14:55:49.827592 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b87b718b-ba60-4684-88ef-fa519fa2e402" containerName="pruner" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827602 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="b87b718b-ba60-4684-88ef-fa519fa2e402" containerName="pruner" Feb 16 14:55:49 crc kubenswrapper[4748]: E0216 14:55:49.827618 
4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4609715-d418-4f84-843a-b916f5e920ec" containerName="collect-profiles" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827627 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4609715-d418-4f84-843a-b916f5e920ec" containerName="collect-profiles" Feb 16 14:55:49 crc kubenswrapper[4748]: E0216 14:55:49.827641 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f81b4e90-9e8a-46f0-80a0-eb1190b19425" containerName="pruner" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827652 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f81b4e90-9e8a-46f0-80a0-eb1190b19425" containerName="pruner" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827788 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="1005279f-a20b-43cd-957a-731252114f31" containerName="route-controller-manager" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827806 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="b87b718b-ba60-4684-88ef-fa519fa2e402" containerName="pruner" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827818 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4609715-d418-4f84-843a-b916f5e920ec" containerName="collect-profiles" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.827831 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f81b4e90-9e8a-46f0-80a0-eb1190b19425" containerName="pruner" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.828395 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.830188 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw"] Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.980153 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tztz8\" (UniqueName: \"kubernetes.io/projected/1005279f-a20b-43cd-957a-731252114f31-kube-api-access-tztz8\") pod \"1005279f-a20b-43cd-957a-731252114f31\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.981603 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-client-ca\") pod \"1005279f-a20b-43cd-957a-731252114f31\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.981638 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1005279f-a20b-43cd-957a-731252114f31-serving-cert\") pod \"1005279f-a20b-43cd-957a-731252114f31\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.981812 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-config\") pod \"1005279f-a20b-43cd-957a-731252114f31\" (UID: \"1005279f-a20b-43cd-957a-731252114f31\") " Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.982012 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-config\") pod 
\"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.982058 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvn88\" (UniqueName: \"kubernetes.io/projected/54b5c915-e3b7-4906-b950-7cf8973ce6d9-kube-api-access-dvn88\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.982099 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b5c915-e3b7-4906-b950-7cf8973ce6d9-serving-cert\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.982165 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-client-ca\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.982373 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-client-ca" (OuterVolumeSpecName: "client-ca") pod "1005279f-a20b-43cd-957a-731252114f31" (UID: "1005279f-a20b-43cd-957a-731252114f31"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.982771 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-config" (OuterVolumeSpecName: "config") pod "1005279f-a20b-43cd-957a-731252114f31" (UID: "1005279f-a20b-43cd-957a-731252114f31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.988979 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1005279f-a20b-43cd-957a-731252114f31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1005279f-a20b-43cd-957a-731252114f31" (UID: "1005279f-a20b-43cd-957a-731252114f31"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:55:49 crc kubenswrapper[4748]: I0216 14:55:49.989035 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1005279f-a20b-43cd-957a-731252114f31-kube-api-access-tztz8" (OuterVolumeSpecName: "kube-api-access-tztz8") pod "1005279f-a20b-43cd-957a-731252114f31" (UID: "1005279f-a20b-43cd-957a-731252114f31"). InnerVolumeSpecName "kube-api-access-tztz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083477 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-client-ca\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083545 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-config\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083566 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvn88\" (UniqueName: \"kubernetes.io/projected/54b5c915-e3b7-4906-b950-7cf8973ce6d9-kube-api-access-dvn88\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083598 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b5c915-e3b7-4906-b950-7cf8973ce6d9-serving-cert\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083639 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tztz8\" (UniqueName: 
\"kubernetes.io/projected/1005279f-a20b-43cd-957a-731252114f31-kube-api-access-tztz8\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083649 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083658 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1005279f-a20b-43cd-957a-731252114f31-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.083667 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1005279f-a20b-43cd-957a-731252114f31-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.085298 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-client-ca\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.085844 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-config\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.089072 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b5c915-e3b7-4906-b950-7cf8973ce6d9-serving-cert\") pod 
\"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.103581 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvn88\" (UniqueName: \"kubernetes.io/projected/54b5c915-e3b7-4906-b950-7cf8973ce6d9-kube-api-access-dvn88\") pod \"route-controller-manager-5687cfbc7f-xqxtw\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.155785 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.259326 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" event={"ID":"1005279f-a20b-43cd-957a-731252114f31","Type":"ContainerDied","Data":"9735d5237c92f1f7453e1ef3601f6bb0895ce6ee30cde3a4e69e89d639acda27"} Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.259392 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.259403 4748 scope.go:117] "RemoveContainer" containerID="6aa69f22407f191a4989278999547ba255c93a2d259bc462b8f9d407033efe60" Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.298422 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh"] Feb 16 14:55:50 crc kubenswrapper[4748]: I0216 14:55:50.302938 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-dl2zh"] Feb 16 14:55:51 crc kubenswrapper[4748]: I0216 14:55:51.002085 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1005279f-a20b-43cd-957a-731252114f31" path="/var/lib/kubelet/pods/1005279f-a20b-43cd-957a-731252114f31/volumes" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.698156 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-j6s5s" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.720977 4748 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-hnhj8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.721384 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" podUID="ad452651-e143-4074-8f39-d3074bc487ca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.736694 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:57 crc kubenswrapper[4748]: E0216 14:55:57.737986 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 14:55:57 crc kubenswrapper[4748]: E0216 14:55:57.738137 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrg62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:fals
e,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nmd5x_openshift-marketplace(883993d1-6837-4aef-952c-720e2901efb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 14:55:57 crc kubenswrapper[4748]: E0216 14:55:57.739289 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nmd5x" podUID="883993d1-6837-4aef-952c-720e2901efb5" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.771114 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc"] Feb 16 14:55:57 crc kubenswrapper[4748]: E0216 14:55:57.771352 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad452651-e143-4074-8f39-d3074bc487ca" containerName="controller-manager" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.771364 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad452651-e143-4074-8f39-d3074bc487ca" containerName="controller-manager" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.771458 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad452651-e143-4074-8f39-d3074bc487ca" containerName="controller-manager" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.771899 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.791953 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc"] Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.795590 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-client-ca\") pod \"ad452651-e143-4074-8f39-d3074bc487ca\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.795646 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx986\" (UniqueName: \"kubernetes.io/projected/ad452651-e143-4074-8f39-d3074bc487ca-kube-api-access-qx986\") pod \"ad452651-e143-4074-8f39-d3074bc487ca\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.795789 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-proxy-ca-bundles\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.795842 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/998d89f0-cd93-4a49-8727-1b1b13b66894-serving-cert\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.795873 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-client-ca\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.795891 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx98s\" (UniqueName: \"kubernetes.io/projected/998d89f0-cd93-4a49-8727-1b1b13b66894-kube-api-access-dx98s\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.795914 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-config\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.797412 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "ad452651-e143-4074-8f39-d3074bc487ca" (UID: "ad452651-e143-4074-8f39-d3074bc487ca"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.814979 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad452651-e143-4074-8f39-d3074bc487ca-kube-api-access-qx986" (OuterVolumeSpecName: "kube-api-access-qx986") pod "ad452651-e143-4074-8f39-d3074bc487ca" (UID: "ad452651-e143-4074-8f39-d3074bc487ca"). InnerVolumeSpecName "kube-api-access-qx986". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:55:57 crc kubenswrapper[4748]: E0216 14:55:57.853393 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 16 14:55:57 crc kubenswrapper[4748]: E0216 14:55:57.853550 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zkr6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-rpxjm_openshift-marketplace(690e9611-61fc-40f8-b3a6-706d8b6218f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 14:55:57 crc kubenswrapper[4748]: E0216 14:55:57.854932 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-rpxjm" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" Feb 16 14:55:57 crc 
kubenswrapper[4748]: I0216 14:55:57.896426 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-proxy-ca-bundles\") pod \"ad452651-e143-4074-8f39-d3074bc487ca\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.896527 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad452651-e143-4074-8f39-d3074bc487ca-serving-cert\") pod \"ad452651-e143-4074-8f39-d3074bc487ca\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.896568 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-config\") pod \"ad452651-e143-4074-8f39-d3074bc487ca\" (UID: \"ad452651-e143-4074-8f39-d3074bc487ca\") " Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.896704 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-config\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.896850 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-proxy-ca-bundles\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.896901 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/998d89f0-cd93-4a49-8727-1b1b13b66894-serving-cert\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.896951 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-client-ca\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.896986 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx98s\" (UniqueName: \"kubernetes.io/projected/998d89f0-cd93-4a49-8727-1b1b13b66894-kube-api-access-dx98s\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.897060 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.897077 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx986\" (UniqueName: \"kubernetes.io/projected/ad452651-e143-4074-8f39-d3074bc487ca-kube-api-access-qx986\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.897565 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ad452651-e143-4074-8f39-d3074bc487ca" (UID: 
"ad452651-e143-4074-8f39-d3074bc487ca"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.898966 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-client-ca\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.899373 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-config" (OuterVolumeSpecName: "config") pod "ad452651-e143-4074-8f39-d3074bc487ca" (UID: "ad452651-e143-4074-8f39-d3074bc487ca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.900730 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-config\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.900782 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-proxy-ca-bundles\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.901383 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/998d89f0-cd93-4a49-8727-1b1b13b66894-serving-cert\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.913782 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx98s\" (UniqueName: \"kubernetes.io/projected/998d89f0-cd93-4a49-8727-1b1b13b66894-kube-api-access-dx98s\") pod \"controller-manager-5d4dfb9c5f-hkmdc\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.953859 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad452651-e143-4074-8f39-d3074bc487ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ad452651-e143-4074-8f39-d3074bc487ca" (UID: "ad452651-e143-4074-8f39-d3074bc487ca"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.997901 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.997926 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ad452651-e143-4074-8f39-d3074bc487ca-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:57 crc kubenswrapper[4748]: I0216 14:55:57.997935 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad452651-e143-4074-8f39-d3074bc487ca-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.047392 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lll47"] Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.092562 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:58 crc kubenswrapper[4748]: W0216 14:55:58.138555 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod078f98ca_d871_47a5_96c3_1e818312c4c4.slice/crio-a7994cc5e7cadba797eac5d3f59daa474f053258565147a7d10d7705f69725bf WatchSource:0}: Error finding container a7994cc5e7cadba797eac5d3f59daa474f053258565147a7d10d7705f69725bf: Status 404 returned error can't find the container with id a7994cc5e7cadba797eac5d3f59daa474f053258565147a7d10d7705f69725bf Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.323497 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lll47" event={"ID":"078f98ca-d871-47a5-96c3-1e818312c4c4","Type":"ContainerStarted","Data":"a7994cc5e7cadba797eac5d3f59daa474f053258565147a7d10d7705f69725bf"} Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.325668 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.325828 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hnhj8" event={"ID":"ad452651-e143-4074-8f39-d3074bc487ca","Type":"ContainerDied","Data":"86b96f8c25b4dafccc42f83506192f90336d7904467d346cfa5a4a0bdcdb3746"} Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.325974 4748 scope.go:117] "RemoveContainer" containerID="fe4bde89ec3ec168200337a3f3739acd0fc094b3bf0c69bb4eab771be5eae8fc" Feb 16 14:55:58 crc kubenswrapper[4748]: E0216 14:55:58.341288 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-rpxjm" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" Feb 16 14:55:58 crc kubenswrapper[4748]: E0216 14:55:58.342600 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nmd5x" podUID="883993d1-6837-4aef-952c-720e2901efb5" Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.458468 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw"] Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.479482 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnhj8"] Feb 16 14:55:58 crc kubenswrapper[4748]: I0216 14:55:58.486216 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hnhj8"] Feb 16 14:55:58 crc 
kubenswrapper[4748]: I0216 14:55:58.496392 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc"] Feb 16 14:55:58 crc kubenswrapper[4748]: W0216 14:55:58.527960 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod998d89f0_cd93_4a49_8727_1b1b13b66894.slice/crio-948d76fc3950c6aff5cf0934d72f60ecc928a137b3390983c950d2c7199e3040 WatchSource:0}: Error finding container 948d76fc3950c6aff5cf0934d72f60ecc928a137b3390983c950d2c7199e3040: Status 404 returned error can't find the container with id 948d76fc3950c6aff5cf0934d72f60ecc928a137b3390983c950d2c7199e3040 Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.005481 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad452651-e143-4074-8f39-d3074bc487ca" path="/var/lib/kubelet/pods/ad452651-e143-4074-8f39-d3074bc487ca/volumes" Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.103155 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc"] Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.248908 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw"] Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.348289 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lll47" event={"ID":"078f98ca-d871-47a5-96c3-1e818312c4c4","Type":"ContainerStarted","Data":"c37b5885df88ad194449a5a41fa2d07e36b0940741366e534ae4ebe384d61e88"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.348338 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lll47" event={"ID":"078f98ca-d871-47a5-96c3-1e818312c4c4","Type":"ContainerStarted","Data":"ae9aa4d9551019ae6831c41d2999b2a0402e3006f0cda989b5036052f1103eb0"} Feb 
16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.352934 4748 generic.go:334] "Generic (PLEG): container finished" podID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerID="fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.353222 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hlms" event={"ID":"3807d8da-105e-4dbb-8446-81b6d4a2ae05","Type":"ContainerDied","Data":"fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.359942 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" event={"ID":"54b5c915-e3b7-4906-b950-7cf8973ce6d9","Type":"ContainerStarted","Data":"10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.359990 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" event={"ID":"54b5c915-e3b7-4906-b950-7cf8973ce6d9","Type":"ContainerStarted","Data":"18ab456a3f1b1df4be9010c2550dc40dc7543f8a807fc387662111ef026f52a4"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.360796 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.364541 4748 generic.go:334] "Generic (PLEG): container finished" podID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerID="c80415744875bf98e3ed468b4e3315be2ecafaa05b5562eae2e127eb6af3c111" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.364629 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk29" 
event={"ID":"9ca47343-f274-4f11-84ed-6fc055cf41f9","Type":"ContainerDied","Data":"c80415744875bf98e3ed468b4e3315be2ecafaa05b5562eae2e127eb6af3c111"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.366903 4748 generic.go:334] "Generic (PLEG): container finished" podID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerID="d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.367129 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdw4" event={"ID":"3c38687a-4ac8-4a60-8e5b-d0a260d6773d","Type":"ContainerDied","Data":"d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.370701 4748 generic.go:334] "Generic (PLEG): container finished" podID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerID="68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.370768 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2jwt" event={"ID":"a62cdd4f-90c2-45c7-893d-35356124bf3c","Type":"ContainerDied","Data":"68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.373443 4748 generic.go:334] "Generic (PLEG): container finished" podID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerID="fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.373482 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2jmq" event={"ID":"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a","Type":"ContainerDied","Data":"fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.376194 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" event={"ID":"998d89f0-cd93-4a49-8727-1b1b13b66894","Type":"ContainerStarted","Data":"bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.376214 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" event={"ID":"998d89f0-cd93-4a49-8727-1b1b13b66894","Type":"ContainerStarted","Data":"948d76fc3950c6aff5cf0934d72f60ecc928a137b3390983c950d2c7199e3040"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.376669 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.382085 4748 generic.go:334] "Generic (PLEG): container finished" podID="712cdaea-3348-4e68-8761-842d520bedc6" containerID="a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9" exitCode=0 Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.382131 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9vdw" event={"ID":"712cdaea-3348-4e68-8761-842d520bedc6","Type":"ContainerDied","Data":"a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9"} Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.385051 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.398207 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" podStartSLOduration=20.39818808 podStartE2EDuration="20.39818808s" podCreationTimestamp="2026-02-16 14:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-16 14:55:59.397830201 +0000 UTC m=+185.089499260" watchObservedRunningTime="2026-02-16 14:55:59.39818808 +0000 UTC m=+185.089857119" Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.402480 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:55:59 crc kubenswrapper[4748]: I0216 14:55:59.439210 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" podStartSLOduration=20.439192731 podStartE2EDuration="20.439192731s" podCreationTimestamp="2026-02-16 14:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:55:59.438937284 +0000 UTC m=+185.130606323" watchObservedRunningTime="2026-02-16 14:55:59.439192731 +0000 UTC m=+185.130861770" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.387707 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" podUID="54b5c915-e3b7-4906-b950-7cf8973ce6d9" containerName="route-controller-manager" containerID="cri-o://10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9" gracePeriod=30 Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.387843 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" podUID="998d89f0-cd93-4a49-8727-1b1b13b66894" containerName="controller-manager" containerID="cri-o://bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7" gracePeriod=30 Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.412491 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lll47" 
podStartSLOduration=165.412470665 podStartE2EDuration="2m45.412470665s" podCreationTimestamp="2026-02-16 14:53:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:00.410983638 +0000 UTC m=+186.102652687" watchObservedRunningTime="2026-02-16 14:56:00.412470665 +0000 UTC m=+186.104139704" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.858653 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.879136 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.920850 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"] Feb 16 14:56:00 crc kubenswrapper[4748]: E0216 14:56:00.921107 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54b5c915-e3b7-4906-b950-7cf8973ce6d9" containerName="route-controller-manager" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.921118 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="54b5c915-e3b7-4906-b950-7cf8973ce6d9" containerName="route-controller-manager" Feb 16 14:56:00 crc kubenswrapper[4748]: E0216 14:56:00.921129 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="998d89f0-cd93-4a49-8727-1b1b13b66894" containerName="controller-manager" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.921135 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="998d89f0-cd93-4a49-8727-1b1b13b66894" containerName="controller-manager" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.921257 4748 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="54b5c915-e3b7-4906-b950-7cf8973ce6d9" containerName="route-controller-manager" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.921270 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="998d89f0-cd93-4a49-8727-1b1b13b66894" containerName="controller-manager" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.921637 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.925235 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"] Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.966665 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-client-ca\") pod \"998d89f0-cd93-4a49-8727-1b1b13b66894\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.966736 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/998d89f0-cd93-4a49-8727-1b1b13b66894-serving-cert\") pod \"998d89f0-cd93-4a49-8727-1b1b13b66894\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.966836 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx98s\" (UniqueName: \"kubernetes.io/projected/998d89f0-cd93-4a49-8727-1b1b13b66894-kube-api-access-dx98s\") pod \"998d89f0-cd93-4a49-8727-1b1b13b66894\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.966885 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-proxy-ca-bundles\") pod \"998d89f0-cd93-4a49-8727-1b1b13b66894\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.966968 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-config\") pod \"998d89f0-cd93-4a49-8727-1b1b13b66894\" (UID: \"998d89f0-cd93-4a49-8727-1b1b13b66894\") " Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.967944 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "998d89f0-cd93-4a49-8727-1b1b13b66894" (UID: "998d89f0-cd93-4a49-8727-1b1b13b66894"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.968251 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-client-ca" (OuterVolumeSpecName: "client-ca") pod "998d89f0-cd93-4a49-8727-1b1b13b66894" (UID: "998d89f0-cd93-4a49-8727-1b1b13b66894"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.968274 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-config" (OuterVolumeSpecName: "config") pod "998d89f0-cd93-4a49-8727-1b1b13b66894" (UID: "998d89f0-cd93-4a49-8727-1b1b13b66894"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.972697 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/998d89f0-cd93-4a49-8727-1b1b13b66894-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "998d89f0-cd93-4a49-8727-1b1b13b66894" (UID: "998d89f0-cd93-4a49-8727-1b1b13b66894"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:00 crc kubenswrapper[4748]: I0216 14:56:00.973031 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/998d89f0-cd93-4a49-8727-1b1b13b66894-kube-api-access-dx98s" (OuterVolumeSpecName: "kube-api-access-dx98s") pod "998d89f0-cd93-4a49-8727-1b1b13b66894" (UID: "998d89f0-cd93-4a49-8727-1b1b13b66894"). InnerVolumeSpecName "kube-api-access-dx98s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.068439 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b5c915-e3b7-4906-b950-7cf8973ce6d9-serving-cert\") pod \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.068521 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvn88\" (UniqueName: \"kubernetes.io/projected/54b5c915-e3b7-4906-b950-7cf8973ce6d9-kube-api-access-dvn88\") pod \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.068568 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-client-ca\") pod \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\" (UID: 
\"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.068587 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-config\") pod \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\" (UID: \"54b5c915-e3b7-4906-b950-7cf8973ce6d9\") " Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.068811 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6sgc\" (UniqueName: \"kubernetes.io/projected/611a95e5-7828-4f25-9db8-a3827d631750-kube-api-access-h6sgc\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.068870 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-client-ca\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.068898 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611a95e5-7828-4f25-9db8-a3827d631750-serving-cert\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069222 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-proxy-ca-bundles\") pod 
\"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069294 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-config\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069402 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-client-ca" (OuterVolumeSpecName: "client-ca") pod "54b5c915-e3b7-4906-b950-7cf8973ce6d9" (UID: "54b5c915-e3b7-4906-b950-7cf8973ce6d9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069474 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069494 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069506 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069515 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/998d89f0-cd93-4a49-8727-1b1b13b66894-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069523 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/998d89f0-cd93-4a49-8727-1b1b13b66894-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069533 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx98s\" (UniqueName: \"kubernetes.io/projected/998d89f0-cd93-4a49-8727-1b1b13b66894-kube-api-access-dx98s\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.069633 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-config" (OuterVolumeSpecName: "config") pod "54b5c915-e3b7-4906-b950-7cf8973ce6d9" (UID: "54b5c915-e3b7-4906-b950-7cf8973ce6d9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.072193 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54b5c915-e3b7-4906-b950-7cf8973ce6d9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "54b5c915-e3b7-4906-b950-7cf8973ce6d9" (UID: "54b5c915-e3b7-4906-b950-7cf8973ce6d9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.074125 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54b5c915-e3b7-4906-b950-7cf8973ce6d9-kube-api-access-dvn88" (OuterVolumeSpecName: "kube-api-access-dvn88") pod "54b5c915-e3b7-4906-b950-7cf8973ce6d9" (UID: "54b5c915-e3b7-4906-b950-7cf8973ce6d9"). InnerVolumeSpecName "kube-api-access-dvn88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.170760 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611a95e5-7828-4f25-9db8-a3827d631750-serving-cert\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.170816 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-proxy-ca-bundles\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.170837 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-config\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.170882 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6sgc\" (UniqueName: \"kubernetes.io/projected/611a95e5-7828-4f25-9db8-a3827d631750-kube-api-access-h6sgc\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.170927 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-client-ca\") pod 
\"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.170984 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54b5c915-e3b7-4906-b950-7cf8973ce6d9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.171013 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvn88\" (UniqueName: \"kubernetes.io/projected/54b5c915-e3b7-4906-b950-7cf8973ce6d9-kube-api-access-dvn88\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.171025 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54b5c915-e3b7-4906-b950-7cf8973ce6d9-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.172376 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-config\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.172588 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-client-ca\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.172811 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-proxy-ca-bundles\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.175788 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611a95e5-7828-4f25-9db8-a3827d631750-serving-cert\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.190971 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6sgc\" (UniqueName: \"kubernetes.io/projected/611a95e5-7828-4f25-9db8-a3827d631750-kube-api-access-h6sgc\") pod \"controller-manager-c8f9c4b96-wmvxb\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") " pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.238329 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.397480 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk29" event={"ID":"9ca47343-f274-4f11-84ed-6fc055cf41f9","Type":"ContainerStarted","Data":"370eaf64925530e00becbfe65574214732480b26783efa70aae344cf01c3c188"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.405222 4748 generic.go:334] "Generic (PLEG): container finished" podID="998d89f0-cd93-4a49-8727-1b1b13b66894" containerID="bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7" exitCode=0 Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.405371 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" event={"ID":"998d89f0-cd93-4a49-8727-1b1b13b66894","Type":"ContainerDied","Data":"bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.405414 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" event={"ID":"998d89f0-cd93-4a49-8727-1b1b13b66894","Type":"ContainerDied","Data":"948d76fc3950c6aff5cf0934d72f60ecc928a137b3390983c950d2c7199e3040"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.405438 4748 scope.go:117] "RemoveContainer" containerID="bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.405574 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.413887 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdw4" event={"ID":"3c38687a-4ac8-4a60-8e5b-d0a260d6773d","Type":"ContainerStarted","Data":"825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.420415 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-crk29" podStartSLOduration=2.686151527 podStartE2EDuration="37.420399391s" podCreationTimestamp="2026-02-16 14:55:24 +0000 UTC" firstStartedPulling="2026-02-16 14:55:25.723902518 +0000 UTC m=+151.415571557" lastFinishedPulling="2026-02-16 14:56:00.458150382 +0000 UTC m=+186.149819421" observedRunningTime="2026-02-16 14:56:01.419451698 +0000 UTC m=+187.111120737" watchObservedRunningTime="2026-02-16 14:56:01.420399391 +0000 UTC m=+187.112068430" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.426302 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9vdw" event={"ID":"712cdaea-3348-4e68-8761-842d520bedc6","Type":"ContainerStarted","Data":"75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.428931 4748 scope.go:117] "RemoveContainer" containerID="bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7" Feb 16 14:56:01 crc kubenswrapper[4748]: E0216 14:56:01.430048 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7\": container with ID starting with bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7 not found: ID does not exist" 
containerID="bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.430081 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7"} err="failed to get container status \"bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7\": rpc error: code = NotFound desc = could not find container \"bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7\": container with ID starting with bdda5dd9b49b0313d77706c8e24c05906245b369cb9b4ca5297f0d2b9d23ace7 not found: ID does not exist" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.438431 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc"] Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.444563 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hlms" event={"ID":"3807d8da-105e-4dbb-8446-81b6d4a2ae05","Type":"ContainerStarted","Data":"ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.444615 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d4dfb9c5f-hkmdc"] Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.462893 4748 generic.go:334] "Generic (PLEG): container finished" podID="54b5c915-e3b7-4906-b950-7cf8973ce6d9" containerID="10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9" exitCode=0 Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.463293 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" event={"ID":"54b5c915-e3b7-4906-b950-7cf8973ce6d9","Type":"ContainerDied","Data":"10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9"} Feb 16 14:56:01 
crc kubenswrapper[4748]: I0216 14:56:01.463348 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" event={"ID":"54b5c915-e3b7-4906-b950-7cf8973ce6d9","Type":"ContainerDied","Data":"18ab456a3f1b1df4be9010c2550dc40dc7543f8a807fc387662111ef026f52a4"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.463377 4748 scope.go:117] "RemoveContainer" containerID="10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.463625 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.495952 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9jdw4" podStartSLOduration=2.608279317 podStartE2EDuration="35.49592736s" podCreationTimestamp="2026-02-16 14:55:26 +0000 UTC" firstStartedPulling="2026-02-16 14:55:27.831970096 +0000 UTC m=+153.523639125" lastFinishedPulling="2026-02-16 14:56:00.719618129 +0000 UTC m=+186.411287168" observedRunningTime="2026-02-16 14:56:01.486440124 +0000 UTC m=+187.178109163" watchObservedRunningTime="2026-02-16 14:56:01.49592736 +0000 UTC m=+187.187596389" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.496920 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2jwt" event={"ID":"a62cdd4f-90c2-45c7-893d-35356124bf3c","Type":"ContainerStarted","Data":"ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.499047 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2jmq" 
event={"ID":"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a","Type":"ContainerStarted","Data":"823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09"} Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.512672 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"] Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.525197 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s9vdw" podStartSLOduration=3.877856648 podStartE2EDuration="38.525175618s" podCreationTimestamp="2026-02-16 14:55:23 +0000 UTC" firstStartedPulling="2026-02-16 14:55:25.731871196 +0000 UTC m=+151.423540235" lastFinishedPulling="2026-02-16 14:56:00.379190166 +0000 UTC m=+186.070859205" observedRunningTime="2026-02-16 14:56:01.522738647 +0000 UTC m=+187.214407686" watchObservedRunningTime="2026-02-16 14:56:01.525175618 +0000 UTC m=+187.216844657" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.528700 4748 scope.go:117] "RemoveContainer" containerID="10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9" Feb 16 14:56:01 crc kubenswrapper[4748]: E0216 14:56:01.531551 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9\": container with ID starting with 10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9 not found: ID does not exist" containerID="10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.531614 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9"} err="failed to get container status \"10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9\": rpc error: code = NotFound desc = could not 
find container \"10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9\": container with ID starting with 10181fe2bd7c35da6f893c23a2453171c220ca2a047c570a22d8cdba1057b8a9 not found: ID does not exist" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.549629 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2hlms" podStartSLOduration=3.6625454189999997 podStartE2EDuration="38.549609086s" podCreationTimestamp="2026-02-16 14:55:23 +0000 UTC" firstStartedPulling="2026-02-16 14:55:25.739098676 +0000 UTC m=+151.430767715" lastFinishedPulling="2026-02-16 14:56:00.626162343 +0000 UTC m=+186.317831382" observedRunningTime="2026-02-16 14:56:01.549085813 +0000 UTC m=+187.240754852" watchObservedRunningTime="2026-02-16 14:56:01.549609086 +0000 UTC m=+187.241278125" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.569934 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-w2jmq" podStartSLOduration=3.740063018 podStartE2EDuration="38.569911652s" podCreationTimestamp="2026-02-16 14:55:23 +0000 UTC" firstStartedPulling="2026-02-16 14:55:25.707053738 +0000 UTC m=+151.398722767" lastFinishedPulling="2026-02-16 14:56:00.536902362 +0000 UTC m=+186.228571401" observedRunningTime="2026-02-16 14:56:01.56662632 +0000 UTC m=+187.258295519" watchObservedRunningTime="2026-02-16 14:56:01.569911652 +0000 UTC m=+187.261580681" Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.586731 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw"] Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.601678 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5687cfbc7f-xqxtw"] Feb 16 14:56:01 crc kubenswrapper[4748]: I0216 14:56:01.605296 4748 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/redhat-marketplace-p2jwt" podStartSLOduration=4.009168314 podStartE2EDuration="36.605282112s" podCreationTimestamp="2026-02-16 14:55:25 +0000 UTC" firstStartedPulling="2026-02-16 14:55:27.863437339 +0000 UTC m=+153.555106378" lastFinishedPulling="2026-02-16 14:56:00.459551137 +0000 UTC m=+186.151220176" observedRunningTime="2026-02-16 14:56:01.603161269 +0000 UTC m=+187.294830308" watchObservedRunningTime="2026-02-16 14:56:01.605282112 +0000 UTC m=+187.296951151" Feb 16 14:56:02 crc kubenswrapper[4748]: I0216 14:56:02.504955 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" event={"ID":"611a95e5-7828-4f25-9db8-a3827d631750","Type":"ContainerStarted","Data":"5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839"} Feb 16 14:56:02 crc kubenswrapper[4748]: I0216 14:56:02.505001 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" event={"ID":"611a95e5-7828-4f25-9db8-a3827d631750","Type":"ContainerStarted","Data":"f1ead260b4720dd163b5732ea970fcc1a3c07973f480e269b79abbfb755b248f"} Feb 16 14:56:02 crc kubenswrapper[4748]: I0216 14:56:02.505180 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:02 crc kubenswrapper[4748]: I0216 14:56:02.518467 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" Feb 16 14:56:02 crc kubenswrapper[4748]: I0216 14:56:02.526931 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" podStartSLOduration=3.52691203 podStartE2EDuration="3.52691203s" podCreationTimestamp="2026-02-16 14:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:02.523730391 +0000 UTC m=+188.215399430" watchObservedRunningTime="2026-02-16 14:56:02.52691203 +0000 UTC m=+188.218581069" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.011875 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54b5c915-e3b7-4906-b950-7cf8973ce6d9" path="/var/lib/kubelet/pods/54b5c915-e3b7-4906-b950-7cf8973ce6d9/volumes" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.012603 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="998d89f0-cd93-4a49-8727-1b1b13b66894" path="/var/lib/kubelet/pods/998d89f0-cd93-4a49-8727-1b1b13b66894/volumes" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.080286 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.673198 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq"] Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.674738 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.685339 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.685547 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.685740 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.685812 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.685908 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.686474 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.705253 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq"] Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.819568 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-client-ca\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.819625 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f574c3-3516-4548-a7b8-9bceca028664-serving-cert\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.819697 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8tk8\" (UniqueName: \"kubernetes.io/projected/35f574c3-3516-4548-a7b8-9bceca028664-kube-api-access-g8tk8\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.819800 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-config\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.920770 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-config\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.920857 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-client-ca\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" 
(UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.920884 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f574c3-3516-4548-a7b8-9bceca028664-serving-cert\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.920925 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8tk8\" (UniqueName: \"kubernetes.io/projected/35f574c3-3516-4548-a7b8-9bceca028664-kube-api-access-g8tk8\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.921753 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-client-ca\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.922088 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-config\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.929033 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f574c3-3516-4548-a7b8-9bceca028664-serving-cert\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:03 crc kubenswrapper[4748]: I0216 14:56:03.947220 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8tk8\" (UniqueName: \"kubernetes.io/projected/35f574c3-3516-4548-a7b8-9bceca028664-kube-api-access-g8tk8\") pod \"route-controller-manager-6dfcdbc84b-qf2pq\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") " pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.001772 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.045908 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.045985 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.068991 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s9vdw" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.069100 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s9vdw" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.281255 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq"] Feb 16 14:56:04 crc kubenswrapper[4748]: W0216 14:56:04.289289 4748 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35f574c3_3516_4548_a7b8_9bceca028664.slice/crio-9878f08dd156b98fb2505cad1d38eff40d7c15b7c8d5c1364dae340b8e87520b WatchSource:0}: Error finding container 9878f08dd156b98fb2505cad1d38eff40d7c15b7c8d5c1364dae340b8e87520b: Status 404 returned error can't find the container with id 9878f08dd156b98fb2505cad1d38eff40d7c15b7c8d5c1364dae340b8e87520b Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.330418 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.336669 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s9vdw" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.436747 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-w2jmq" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.436797 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-w2jmq" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.483351 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-w2jmq" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.519324 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" event={"ID":"35f574c3-3516-4548-a7b8-9bceca028664","Type":"ContainerStarted","Data":"9878f08dd156b98fb2505cad1d38eff40d7c15b7c8d5c1364dae340b8e87520b"} Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.522377 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-crk29" Feb 16 14:56:04 crc 
kubenswrapper[4748]: I0216 14:56:04.522416 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-crk29" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.562853 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-crk29" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.617635 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.618510 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.620600 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.621607 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.624763 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.729169 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.729233 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.734967 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87771814-5361-4ed7-b8bb-6f47b5646a4a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.735055 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87771814-5361-4ed7-b8bb-6f47b5646a4a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.836655 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87771814-5361-4ed7-b8bb-6f47b5646a4a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.836699 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87771814-5361-4ed7-b8bb-6f47b5646a4a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.836790 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87771814-5361-4ed7-b8bb-6f47b5646a4a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 
14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.857460 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87771814-5361-4ed7-b8bb-6f47b5646a4a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:04 crc kubenswrapper[4748]: I0216 14:56:04.935457 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.406816 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.504398 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bw7n"] Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.541975 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" event={"ID":"35f574c3-3516-4548-a7b8-9bceca028664","Type":"ContainerStarted","Data":"cb29c369b9667b65fdf0586add8ba2b05fcb0194dd140e740b603511a7178fb7"} Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.542615 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.546099 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87771814-5361-4ed7-b8bb-6f47b5646a4a","Type":"ContainerStarted","Data":"c541cf238ca00c7ff850a102e52c9425bac96606e6a9de6eee641f3efd937f5e"} Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.551424 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.571254 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" podStartSLOduration=6.5712345899999995 podStartE2EDuration="6.57123459s" podCreationTimestamp="2026-02-16 14:55:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:05.568920252 +0000 UTC m=+191.260589291" watchObservedRunningTime="2026-02-16 14:56:05.57123459 +0000 UTC m=+191.262903629" Feb 16 14:56:05 crc kubenswrapper[4748]: I0216 14:56:05.638359 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-crk29" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.087666 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p2jwt" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.087749 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p2jwt" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.138570 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p2jwt" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.479166 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9jdw4" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.479608 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9jdw4" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.519875 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-9jdw4" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.555004 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87771814-5361-4ed7-b8bb-6f47b5646a4a","Type":"ContainerStarted","Data":"dfd429886f5dabe28c9b94b834a7b236ac245046e5ec5a4a8161e9dc398ee290"} Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.575692 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=2.575656929 podStartE2EDuration="2.575656929s" podCreationTimestamp="2026-02-16 14:56:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:06.572789048 +0000 UTC m=+192.264458087" watchObservedRunningTime="2026-02-16 14:56:06.575656929 +0000 UTC m=+192.267325978" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.598425 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9jdw4" Feb 16 14:56:06 crc kubenswrapper[4748]: I0216 14:56:06.604474 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p2jwt" Feb 16 14:56:07 crc kubenswrapper[4748]: I0216 14:56:07.563925 4748 generic.go:334] "Generic (PLEG): container finished" podID="87771814-5361-4ed7-b8bb-6f47b5646a4a" containerID="dfd429886f5dabe28c9b94b834a7b236ac245046e5ec5a4a8161e9dc398ee290" exitCode=0 Feb 16 14:56:07 crc kubenswrapper[4748]: I0216 14:56:07.564255 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87771814-5361-4ed7-b8bb-6f47b5646a4a","Type":"ContainerDied","Data":"dfd429886f5dabe28c9b94b834a7b236ac245046e5ec5a4a8161e9dc398ee290"} Feb 16 14:56:07 crc kubenswrapper[4748]: I0216 14:56:07.590339 4748 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdw4"] Feb 16 14:56:08 crc kubenswrapper[4748]: I0216 14:56:08.907085 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.000947 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87771814-5361-4ed7-b8bb-6f47b5646a4a-kubelet-dir\") pod \"87771814-5361-4ed7-b8bb-6f47b5646a4a\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.001073 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87771814-5361-4ed7-b8bb-6f47b5646a4a-kube-api-access\") pod \"87771814-5361-4ed7-b8bb-6f47b5646a4a\" (UID: \"87771814-5361-4ed7-b8bb-6f47b5646a4a\") " Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.002887 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87771814-5361-4ed7-b8bb-6f47b5646a4a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "87771814-5361-4ed7-b8bb-6f47b5646a4a" (UID: "87771814-5361-4ed7-b8bb-6f47b5646a4a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.015235 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87771814-5361-4ed7-b8bb-6f47b5646a4a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "87771814-5361-4ed7-b8bb-6f47b5646a4a" (UID: "87771814-5361-4ed7-b8bb-6f47b5646a4a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.103209 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/87771814-5361-4ed7-b8bb-6f47b5646a4a-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.103266 4748 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/87771814-5361-4ed7-b8bb-6f47b5646a4a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.579665 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.579740 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"87771814-5361-4ed7-b8bb-6f47b5646a4a","Type":"ContainerDied","Data":"c541cf238ca00c7ff850a102e52c9425bac96606e6a9de6eee641f3efd937f5e"} Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.579844 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c541cf238ca00c7ff850a102e52c9425bac96606e6a9de6eee641f3efd937f5e" Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.580020 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9jdw4" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="registry-server" containerID="cri-o://825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401" gracePeriod=2 Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.792598 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crk29"] Feb 16 14:56:09 crc kubenswrapper[4748]: I0216 14:56:09.792956 4748 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-crk29" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="registry-server" containerID="cri-o://370eaf64925530e00becbfe65574214732480b26783efa70aae344cf01c3c188" gracePeriod=2 Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.420912 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 14:56:10 crc kubenswrapper[4748]: E0216 14:56:10.421194 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87771814-5361-4ed7-b8bb-6f47b5646a4a" containerName="pruner" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.421210 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="87771814-5361-4ed7-b8bb-6f47b5646a4a" containerName="pruner" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.421348 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="87771814-5361-4ed7-b8bb-6f47b5646a4a" containerName="pruner" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.421909 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.425049 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.425318 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.446778 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.526125 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-var-lock\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.526683 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kube-api-access\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.526863 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.579200 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdw4" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.613436 4748 generic.go:334] "Generic (PLEG): container finished" podID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerID="370eaf64925530e00becbfe65574214732480b26783efa70aae344cf01c3c188" exitCode=0 Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.613526 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk29" event={"ID":"9ca47343-f274-4f11-84ed-6fc055cf41f9","Type":"ContainerDied","Data":"370eaf64925530e00becbfe65574214732480b26783efa70aae344cf01c3c188"} Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.615626 4748 generic.go:334] "Generic (PLEG): container finished" podID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerID="825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401" exitCode=0 Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.615655 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdw4" event={"ID":"3c38687a-4ac8-4a60-8e5b-d0a260d6773d","Type":"ContainerDied","Data":"825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401"} Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.615672 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdw4" event={"ID":"3c38687a-4ac8-4a60-8e5b-d0a260d6773d","Type":"ContainerDied","Data":"a55f095ed3c601fd516022140aa8d813fac654a3f3215a9f3ebf6273ee2b7371"} Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.615692 4748 scope.go:117] "RemoveContainer" containerID="825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.615866 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdw4" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.628493 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-var-lock\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.628549 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kube-api-access\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.628573 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.628776 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kubelet-dir\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.628856 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-var-lock\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.647436 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kube-api-access\") pod \"installer-9-crc\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.670373 4748 scope.go:117] "RemoveContainer" containerID="d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.696742 4748 scope.go:117] "RemoveContainer" containerID="0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.722532 4748 scope.go:117] "RemoveContainer" containerID="825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401" Feb 16 14:56:10 crc kubenswrapper[4748]: E0216 14:56:10.723194 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401\": container with ID starting with 825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401 not found: ID does not exist" containerID="825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.723281 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401"} err="failed to get container status \"825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401\": rpc error: code = NotFound desc = could not find container \"825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401\": container with ID starting with 825194eba0da0ca94bb86f2f7e670d8d65ac70697bddbaf6a450048c1eba4401 not found: ID does not exist" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.723354 4748 scope.go:117] "RemoveContainer" 
containerID="d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894" Feb 16 14:56:10 crc kubenswrapper[4748]: E0216 14:56:10.723925 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894\": container with ID starting with d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894 not found: ID does not exist" containerID="d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.723994 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894"} err="failed to get container status \"d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894\": rpc error: code = NotFound desc = could not find container \"d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894\": container with ID starting with d32778b4686e7a94af4d9f3d2d920c22ce863f0cb53ccfef2f27c43fd4f4e894 not found: ID does not exist" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.724036 4748 scope.go:117] "RemoveContainer" containerID="0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302" Feb 16 14:56:10 crc kubenswrapper[4748]: E0216 14:56:10.724545 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302\": container with ID starting with 0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302 not found: ID does not exist" containerID="0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.724622 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302"} err="failed to get container status \"0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302\": rpc error: code = NotFound desc = could not find container \"0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302\": container with ID starting with 0087a83853a949a22915716dc307732a45224f0429ac800ca706d831d550a302 not found: ID does not exist" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.729523 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-catalog-content\") pod \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.729578 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr797\" (UniqueName: \"kubernetes.io/projected/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-kube-api-access-hr797\") pod \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.729619 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-utilities\") pod \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\" (UID: \"3c38687a-4ac8-4a60-8e5b-d0a260d6773d\") " Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.730676 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-utilities" (OuterVolumeSpecName: "utilities") pod "3c38687a-4ac8-4a60-8e5b-d0a260d6773d" (UID: "3c38687a-4ac8-4a60-8e5b-d0a260d6773d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.746995 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-kube-api-access-hr797" (OuterVolumeSpecName: "kube-api-access-hr797") pod "3c38687a-4ac8-4a60-8e5b-d0a260d6773d" (UID: "3c38687a-4ac8-4a60-8e5b-d0a260d6773d"). InnerVolumeSpecName "kube-api-access-hr797". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.758591 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3c38687a-4ac8-4a60-8e5b-d0a260d6773d" (UID: "3c38687a-4ac8-4a60-8e5b-d0a260d6773d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.764693 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.833517 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.833572 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hr797\" (UniqueName: \"kubernetes.io/projected/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-kube-api-access-hr797\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.833610 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3c38687a-4ac8-4a60-8e5b-d0a260d6773d-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.954927 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdw4"] Feb 16 14:56:10 crc kubenswrapper[4748]: I0216 14:56:10.958913 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdw4"] Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.002488 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" path="/var/lib/kubelet/pods/3c38687a-4ac8-4a60-8e5b-d0a260d6773d/volumes" Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.045051 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-crk29" Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.139666 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkpk\" (UniqueName: \"kubernetes.io/projected/9ca47343-f274-4f11-84ed-6fc055cf41f9-kube-api-access-4wkpk\") pod \"9ca47343-f274-4f11-84ed-6fc055cf41f9\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.139831 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-catalog-content\") pod \"9ca47343-f274-4f11-84ed-6fc055cf41f9\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.139868 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-utilities\") pod \"9ca47343-f274-4f11-84ed-6fc055cf41f9\" (UID: \"9ca47343-f274-4f11-84ed-6fc055cf41f9\") " Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.141155 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-utilities" (OuterVolumeSpecName: "utilities") pod "9ca47343-f274-4f11-84ed-6fc055cf41f9" (UID: "9ca47343-f274-4f11-84ed-6fc055cf41f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.145752 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ca47343-f274-4f11-84ed-6fc055cf41f9-kube-api-access-4wkpk" (OuterVolumeSpecName: "kube-api-access-4wkpk") pod "9ca47343-f274-4f11-84ed-6fc055cf41f9" (UID: "9ca47343-f274-4f11-84ed-6fc055cf41f9"). InnerVolumeSpecName "kube-api-access-4wkpk". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.195224 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ca47343-f274-4f11-84ed-6fc055cf41f9" (UID: "9ca47343-f274-4f11-84ed-6fc055cf41f9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.240947 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkpk\" (UniqueName: \"kubernetes.io/projected/9ca47343-f274-4f11-84ed-6fc055cf41f9-kube-api-access-4wkpk\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.240978 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.240988 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ca47343-f274-4f11-84ed-6fc055cf41f9-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.254550 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.628348 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-crk29" event={"ID":"9ca47343-f274-4f11-84ed-6fc055cf41f9","Type":"ContainerDied","Data":"46c19eb6dd1084940d74639505e2b39dc763bd35136f22e528bb190dcee932e8"}
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.628402 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-crk29"
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.628448 4748 scope.go:117] "RemoveContainer" containerID="370eaf64925530e00becbfe65574214732480b26783efa70aae344cf01c3c188"
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.631263 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34d1fe20-ca14-4999-8370-2e8fd245ed7f","Type":"ContainerStarted","Data":"bbb3901eddea35c76ebc0b99353580b1fa204e1d1e39f53d44a343bcc17e4405"}
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.646694 4748 scope.go:117] "RemoveContainer" containerID="c80415744875bf98e3ed468b4e3315be2ecafaa05b5562eae2e127eb6af3c111"
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.665431 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-crk29"]
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.671026 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-crk29"]
Feb 16 14:56:11 crc kubenswrapper[4748]: I0216 14:56:11.695130 4748 scope.go:117] "RemoveContainer" containerID="00a19a50a1d00cf047677bb93896ab73e7c32e69675cb6d1bc896c572216137a"
Feb 16 14:56:12 crc kubenswrapper[4748]: I0216 14:56:12.641493 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34d1fe20-ca14-4999-8370-2e8fd245ed7f","Type":"ContainerStarted","Data":"481dc11c4204b6ac67447a60ee211a30d42b5dd1f3835bd90260b4ccd4d44f66"}
Feb 16 14:56:13 crc kubenswrapper[4748]: I0216 14:56:13.003781 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" path="/var/lib/kubelet/pods/9ca47343-f274-4f11-84ed-6fc055cf41f9/volumes"
Feb 16 14:56:13 crc kubenswrapper[4748]: I0216 14:56:13.021844 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.021818093 podStartE2EDuration="3.021818093s" podCreationTimestamp="2026-02-16 14:56:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:12.667142141 +0000 UTC m=+198.358811180" watchObservedRunningTime="2026-02-16 14:56:13.021818093 +0000 UTC m=+198.713487142"
Feb 16 14:56:14 crc kubenswrapper[4748]: I0216 14:56:14.087415 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2hlms"
Feb 16 14:56:14 crc kubenswrapper[4748]: I0216 14:56:14.125768 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s9vdw"
Feb 16 14:56:14 crc kubenswrapper[4748]: I0216 14:56:14.481804 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:56:14 crc kubenswrapper[4748]: I0216 14:56:14.660383 4748 generic.go:334] "Generic (PLEG): container finished" podID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerID="1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647" exitCode=0
Feb 16 14:56:14 crc kubenswrapper[4748]: I0216 14:56:14.660475 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpxjm" event={"ID":"690e9611-61fc-40f8-b3a6-706d8b6218f0","Type":"ContainerDied","Data":"1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647"}
Feb 16 14:56:14 crc kubenswrapper[4748]: I0216 14:56:14.663664 4748 generic.go:334] "Generic (PLEG): container finished" podID="883993d1-6837-4aef-952c-720e2901efb5" containerID="d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9" exitCode=0
Feb 16 14:56:14 crc kubenswrapper[4748]: I0216 14:56:14.663691 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmd5x" event={"ID":"883993d1-6837-4aef-952c-720e2901efb5","Type":"ContainerDied","Data":"d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9"}
Feb 16 14:56:15 crc kubenswrapper[4748]: I0216 14:56:15.674157 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmd5x" event={"ID":"883993d1-6837-4aef-952c-720e2901efb5","Type":"ContainerStarted","Data":"415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19"}
Feb 16 14:56:15 crc kubenswrapper[4748]: I0216 14:56:15.677503 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpxjm" event={"ID":"690e9611-61fc-40f8-b3a6-706d8b6218f0","Type":"ContainerStarted","Data":"fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c"}
Feb 16 14:56:15 crc kubenswrapper[4748]: I0216 14:56:15.699582 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nmd5x" podStartSLOduration=3.537148447 podStartE2EDuration="49.699554265s" podCreationTimestamp="2026-02-16 14:55:26 +0000 UTC" firstStartedPulling="2026-02-16 14:55:28.907506025 +0000 UTC m=+154.599175064" lastFinishedPulling="2026-02-16 14:56:15.069911813 +0000 UTC m=+200.761580882" observedRunningTime="2026-02-16 14:56:15.696197422 +0000 UTC m=+201.387866481" watchObservedRunningTime="2026-02-16 14:56:15.699554265 +0000 UTC m=+201.391223304"
Feb 16 14:56:15 crc kubenswrapper[4748]: I0216 14:56:15.735850 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rpxjm" podStartSLOduration=2.590921107 podStartE2EDuration="48.735814217s" podCreationTimestamp="2026-02-16 14:55:27 +0000 UTC" firstStartedPulling="2026-02-16 14:55:28.934267091 +0000 UTC m=+154.625936130" lastFinishedPulling="2026-02-16 14:56:15.079160201 +0000 UTC m=+200.770829240" observedRunningTime="2026-02-16 14:56:15.727181075 +0000 UTC m=+201.418850114" watchObservedRunningTime="2026-02-16 14:56:15.735814217 +0000 UTC m=+201.427483286"
Feb 16 14:56:17 crc kubenswrapper[4748]: I0216 14:56:17.092516 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:56:17 crc kubenswrapper[4748]: I0216 14:56:17.093136 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nmd5x"
Feb 16 14:56:17 crc kubenswrapper[4748]: I0216 14:56:17.559140 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rpxjm"
Feb 16 14:56:17 crc kubenswrapper[4748]: I0216 14:56:17.559281 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rpxjm"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.141306 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nmd5x" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:18 crc kubenswrapper[4748]: 	timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:18 crc kubenswrapper[4748]:  >
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.186442 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2jmq"]
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.186979 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-w2jmq" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="registry-server" containerID="cri-o://823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09" gracePeriod=2
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.613108 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rpxjm" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="registry-server" probeResult="failure" output=<
Feb 16 14:56:18 crc kubenswrapper[4748]: 	timeout: failed to connect service ":50051" within 1s
Feb 16 14:56:18 crc kubenswrapper[4748]:  >
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.631393 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.700359 4748 generic.go:334] "Generic (PLEG): container finished" podID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerID="823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09" exitCode=0
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.700422 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2jmq" event={"ID":"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a","Type":"ContainerDied","Data":"823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09"}
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.700446 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-w2jmq"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.700469 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-w2jmq" event={"ID":"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a","Type":"ContainerDied","Data":"e4fc8fab1b3347724fb94ea27452db0ccaf793c99727cad3141faf458ee6e7cd"}
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.700494 4748 scope.go:117] "RemoveContainer" containerID="823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.717926 4748 scope.go:117] "RemoveContainer" containerID="fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.749821 4748 scope.go:117] "RemoveContainer" containerID="4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.755137 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-catalog-content\") pod \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") "
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.755192 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7bc2\" (UniqueName: \"kubernetes.io/projected/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-kube-api-access-k7bc2\") pod \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") "
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.756907 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-utilities\") pod \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\" (UID: \"0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a\") "
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.758296 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-utilities" (OuterVolumeSpecName: "utilities") pod "0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" (UID: "0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.770524 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-kube-api-access-k7bc2" (OuterVolumeSpecName: "kube-api-access-k7bc2") pod "0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" (UID: "0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a"). InnerVolumeSpecName "kube-api-access-k7bc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.778279 4748 scope.go:117] "RemoveContainer" containerID="823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09"
Feb 16 14:56:18 crc kubenswrapper[4748]: E0216 14:56:18.779260 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09\": container with ID starting with 823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09 not found: ID does not exist" containerID="823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.779311 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09"} err="failed to get container status \"823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09\": rpc error: code = NotFound desc = could not find container \"823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09\": container with ID starting with 823ff8e6c3f40f276305372f3f6eee3d51ccc1262c9baa79b1f18a8facd27a09 not found: ID does not exist"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.779346 4748 scope.go:117] "RemoveContainer" containerID="fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c"
Feb 16 14:56:18 crc kubenswrapper[4748]: E0216 14:56:18.779725 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c\": container with ID starting with fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c not found: ID does not exist" containerID="fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.779769 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c"} err="failed to get container status \"fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c\": rpc error: code = NotFound desc = could not find container \"fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c\": container with ID starting with fe17f868b88f124426ee5972123edab0f4d09bf7328b138c768c14f8e5e6383c not found: ID does not exist"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.779792 4748 scope.go:117] "RemoveContainer" containerID="4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6"
Feb 16 14:56:18 crc kubenswrapper[4748]: E0216 14:56:18.780018 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6\": container with ID starting with 4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6 not found: ID does not exist" containerID="4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.780056 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6"} err="failed to get container status \"4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6\": rpc error: code = NotFound desc = could not find container \"4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6\": container with ID starting with 4088697a2f40462775a003427aa2667630d47061eac06b94492bb953d69b1cf6 not found: ID does not exist"
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.825082 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" (UID: "0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.859083 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.859430 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7bc2\" (UniqueName: \"kubernetes.io/projected/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-kube-api-access-k7bc2\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:18 crc kubenswrapper[4748]: I0216 14:56:18.859497 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.066475 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-w2jmq"]
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.070143 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-w2jmq"]
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.134005 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"]
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.134381 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" podUID="611a95e5-7828-4f25-9db8-a3827d631750" containerName="controller-manager" containerID="cri-o://5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839" gracePeriod=30
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.147733 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq"]
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.148541 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" podUID="35f574c3-3516-4548-a7b8-9bceca028664" containerName="route-controller-manager" containerID="cri-o://cb29c369b9667b65fdf0586add8ba2b05fcb0194dd140e740b603511a7178fb7" gracePeriod=30
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.642918 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.707787 4748 generic.go:334] "Generic (PLEG): container finished" podID="611a95e5-7828-4f25-9db8-a3827d631750" containerID="5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839" exitCode=0
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.707871 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" event={"ID":"611a95e5-7828-4f25-9db8-a3827d631750","Type":"ContainerDied","Data":"5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839"}
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.707907 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb" event={"ID":"611a95e5-7828-4f25-9db8-a3827d631750","Type":"ContainerDied","Data":"f1ead260b4720dd163b5732ea970fcc1a3c07973f480e269b79abbfb755b248f"}
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.707931 4748 scope.go:117] "RemoveContainer" containerID="5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839"
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.708077 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.717810 4748 generic.go:334] "Generic (PLEG): container finished" podID="35f574c3-3516-4548-a7b8-9bceca028664" containerID="cb29c369b9667b65fdf0586add8ba2b05fcb0194dd140e740b603511a7178fb7" exitCode=0
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.717904 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" event={"ID":"35f574c3-3516-4548-a7b8-9bceca028664","Type":"ContainerDied","Data":"cb29c369b9667b65fdf0586add8ba2b05fcb0194dd140e740b603511a7178fb7"}
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.717937 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" event={"ID":"35f574c3-3516-4548-a7b8-9bceca028664","Type":"ContainerDied","Data":"9878f08dd156b98fb2505cad1d38eff40d7c15b7c8d5c1364dae340b8e87520b"}
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.717952 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9878f08dd156b98fb2505cad1d38eff40d7c15b7c8d5c1364dae340b8e87520b"
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.726793 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq"
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.737852 4748 scope.go:117] "RemoveContainer" containerID="5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839"
Feb 16 14:56:19 crc kubenswrapper[4748]: E0216 14:56:19.738391 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839\": container with ID starting with 5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839 not found: ID does not exist" containerID="5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839"
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.738483 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839"} err="failed to get container status \"5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839\": rpc error: code = NotFound desc = could not find container \"5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839\": container with ID starting with 5be5ca72c31b697d7dc32949772a145e2f0e317f3c425ff91065fe0f5ddda839 not found: ID does not exist"
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.776755 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611a95e5-7828-4f25-9db8-a3827d631750-serving-cert\") pod \"611a95e5-7828-4f25-9db8-a3827d631750\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.776831 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6sgc\" (UniqueName: \"kubernetes.io/projected/611a95e5-7828-4f25-9db8-a3827d631750-kube-api-access-h6sgc\") pod \"611a95e5-7828-4f25-9db8-a3827d631750\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.776865 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-config\") pod \"611a95e5-7828-4f25-9db8-a3827d631750\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.776957 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-client-ca\") pod \"611a95e5-7828-4f25-9db8-a3827d631750\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.777027 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-proxy-ca-bundles\") pod \"611a95e5-7828-4f25-9db8-a3827d631750\" (UID: \"611a95e5-7828-4f25-9db8-a3827d631750\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.778162 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-client-ca" (OuterVolumeSpecName: "client-ca") pod "611a95e5-7828-4f25-9db8-a3827d631750" (UID: "611a95e5-7828-4f25-9db8-a3827d631750"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.779104 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "611a95e5-7828-4f25-9db8-a3827d631750" (UID: "611a95e5-7828-4f25-9db8-a3827d631750"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.779201 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-config" (OuterVolumeSpecName: "config") pod "611a95e5-7828-4f25-9db8-a3827d631750" (UID: "611a95e5-7828-4f25-9db8-a3827d631750"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.785226 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611a95e5-7828-4f25-9db8-a3827d631750-kube-api-access-h6sgc" (OuterVolumeSpecName: "kube-api-access-h6sgc") pod "611a95e5-7828-4f25-9db8-a3827d631750" (UID: "611a95e5-7828-4f25-9db8-a3827d631750"). InnerVolumeSpecName "kube-api-access-h6sgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.785286 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/611a95e5-7828-4f25-9db8-a3827d631750-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "611a95e5-7828-4f25-9db8-a3827d631750" (UID: "611a95e5-7828-4f25-9db8-a3827d631750"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.878734 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f574c3-3516-4548-a7b8-9bceca028664-serving-cert\") pod \"35f574c3-3516-4548-a7b8-9bceca028664\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.878809 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-config\") pod \"35f574c3-3516-4548-a7b8-9bceca028664\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.878840 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8tk8\" (UniqueName: \"kubernetes.io/projected/35f574c3-3516-4548-a7b8-9bceca028664-kube-api-access-g8tk8\") pod \"35f574c3-3516-4548-a7b8-9bceca028664\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.878867 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-client-ca\") pod \"35f574c3-3516-4548-a7b8-9bceca028664\" (UID: \"35f574c3-3516-4548-a7b8-9bceca028664\") "
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.879283 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/611a95e5-7828-4f25-9db8-a3827d631750-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.879304 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6sgc\" (UniqueName: \"kubernetes.io/projected/611a95e5-7828-4f25-9db8-a3827d631750-kube-api-access-h6sgc\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.879321 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-config\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.879332 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.879343 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/611a95e5-7828-4f25-9db8-a3827d631750-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.879614 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-config" (OuterVolumeSpecName: "config") pod "35f574c3-3516-4548-a7b8-9bceca028664" (UID: "35f574c3-3516-4548-a7b8-9bceca028664"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.879929 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-client-ca" (OuterVolumeSpecName: "client-ca") pod "35f574c3-3516-4548-a7b8-9bceca028664" (UID: "35f574c3-3516-4548-a7b8-9bceca028664"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.882161 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f574c3-3516-4548-a7b8-9bceca028664-kube-api-access-g8tk8" (OuterVolumeSpecName: "kube-api-access-g8tk8") pod "35f574c3-3516-4548-a7b8-9bceca028664" (UID: "35f574c3-3516-4548-a7b8-9bceca028664"). InnerVolumeSpecName "kube-api-access-g8tk8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.882436 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35f574c3-3516-4548-a7b8-9bceca028664-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "35f574c3-3516-4548-a7b8-9bceca028664" (UID: "35f574c3-3516-4548-a7b8-9bceca028664"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.981154 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35f574c3-3516-4548-a7b8-9bceca028664-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.981223 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-config\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.981233 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/35f574c3-3516-4548-a7b8-9bceca028664-client-ca\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:19 crc kubenswrapper[4748]: I0216 14:56:19.981248 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8tk8\" (UniqueName: \"kubernetes.io/projected/35f574c3-3516-4548-a7b8-9bceca028664-kube-api-access-g8tk8\") on node \"crc\" DevicePath \"\""
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.034578 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"]
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.038022 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c8f9c4b96-wmvxb"]
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704197 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm"]
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704534 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="extract-utilities"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704554 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="extract-utilities"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704570 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="extract-content"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704579 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="extract-content"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704589 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704597 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704609 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="extract-utilities"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704617 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="extract-utilities"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704629 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="extract-utilities"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704636 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="extract-utilities"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704648 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704656 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704670 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704678 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704691 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="extract-content"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704700 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="extract-content"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704733 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611a95e5-7828-4f25-9db8-a3827d631750" containerName="controller-manager"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704742 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="611a95e5-7828-4f25-9db8-a3827d631750" containerName="controller-manager"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704751 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f574c3-3516-4548-a7b8-9bceca028664" containerName="route-controller-manager"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704762 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f574c3-3516-4548-a7b8-9bceca028664" containerName="route-controller-manager"
Feb 16 14:56:20 crc kubenswrapper[4748]: E0216 14:56:20.704773 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="extract-content"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704780 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="extract-content"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704933 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704946 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c38687a-4ac8-4a60-8e5b-d0a260d6773d" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704960 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ca47343-f274-4f11-84ed-6fc055cf41f9" containerName="registry-server"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704977 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="611a95e5-7828-4f25-9db8-a3827d631750" containerName="controller-manager"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.704985 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f574c3-3516-4548-a7b8-9bceca028664" containerName="route-controller-manager"
Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.705556 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.707735 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-59f965dd57-k5jt7"] Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.708971 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.712197 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.712566 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.712815 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.713095 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.713249 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.713366 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.720986 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.726043 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59f965dd57-k5jt7"] Feb 16 
14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.730911 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.730927 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm"] Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.784542 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq"] Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.787650 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6dfcdbc84b-qf2pq"] Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892269 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5j6\" (UniqueName: \"kubernetes.io/projected/04f4723e-39c4-4094-88c1-7e134269587d-kube-api-access-ct5j6\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892311 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f4723e-39c4-4094-88c1-7e134269587d-serving-cert\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892335 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-proxy-ca-bundles\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892421 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-serving-cert\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892468 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgntk\" (UniqueName: \"kubernetes.io/projected/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-kube-api-access-bgntk\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892497 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-config\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892518 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-client-ca\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " 
pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892541 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-client-ca\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.892566 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-config\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.993374 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-serving-cert\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.993932 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgntk\" (UniqueName: \"kubernetes.io/projected/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-kube-api-access-bgntk\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.993966 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-config\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.993995 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-client-ca\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.994032 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-client-ca\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.994066 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-config\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.994121 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct5j6\" (UniqueName: \"kubernetes.io/projected/04f4723e-39c4-4094-88c1-7e134269587d-kube-api-access-ct5j6\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: 
I0216 14:56:20.994152 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-proxy-ca-bundles\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.994177 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f4723e-39c4-4094-88c1-7e134269587d-serving-cert\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.995246 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-client-ca\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.995455 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-config\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.995500 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-client-ca\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " 
pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.995651 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-config\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.995874 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-proxy-ca-bundles\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.998582 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f4723e-39c4-4094-88c1-7e134269587d-serving-cert\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:20 crc kubenswrapper[4748]: I0216 14:56:20.998604 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-serving-cert\") pod \"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.017154 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgntk\" (UniqueName: \"kubernetes.io/projected/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-kube-api-access-bgntk\") pod 
\"controller-manager-59f965dd57-k5jt7\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.017161 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct5j6\" (UniqueName: \"kubernetes.io/projected/04f4723e-39c4-4094-88c1-7e134269587d-kube-api-access-ct5j6\") pod \"route-controller-manager-5ff6bb6789-drwnm\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.019707 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a" path="/var/lib/kubelet/pods/0bcfd5e9-4fd0-4df6-b9f1-d50138f65a9a/volumes" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.020628 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f574c3-3516-4548-a7b8-9bceca028664" path="/var/lib/kubelet/pods/35f574c3-3516-4548-a7b8-9bceca028664/volumes" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.021429 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611a95e5-7828-4f25-9db8-a3827d631750" path="/var/lib/kubelet/pods/611a95e5-7828-4f25-9db8-a3827d631750/volumes" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.033152 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.057899 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.346622 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-59f965dd57-k5jt7"] Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.404283 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm"] Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.737436 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" event={"ID":"5d11a380-e4a7-44fa-9ae9-ae190ea127bd","Type":"ContainerStarted","Data":"e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a"} Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.737843 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.737861 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" event={"ID":"5d11a380-e4a7-44fa-9ae9-ae190ea127bd","Type":"ContainerStarted","Data":"04e92cce43684aa2a6a8fe381768cadcb52a0e0dbd54b680a2d379fe05fcb494"} Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.739326 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" event={"ID":"04f4723e-39c4-4094-88c1-7e134269587d","Type":"ContainerStarted","Data":"2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339"} Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.739357 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" 
event={"ID":"04f4723e-39c4-4094-88c1-7e134269587d","Type":"ContainerStarted","Data":"231396e57a63df4bad8ba93fc4e5aef944c61c65afcfe3a8c1042498f3a438d6"} Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.739529 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.763253 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.771963 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" podStartSLOduration=2.771943657 podStartE2EDuration="2.771943657s" podCreationTimestamp="2026-02-16 14:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:21.769273112 +0000 UTC m=+207.460942161" watchObservedRunningTime="2026-02-16 14:56:21.771943657 +0000 UTC m=+207.463612706" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.792005 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" podStartSLOduration=2.791978961 podStartE2EDuration="2.791978961s" podCreationTimestamp="2026-02-16 14:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:21.790363471 +0000 UTC m=+207.482032530" watchObservedRunningTime="2026-02-16 14:56:21.791978961 +0000 UTC m=+207.483648000" Feb 16 14:56:21 crc kubenswrapper[4748]: I0216 14:56:21.917343 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 
16 14:56:27 crc kubenswrapper[4748]: I0216 14:56:27.160104 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nmd5x" Feb 16 14:56:27 crc kubenswrapper[4748]: I0216 14:56:27.225658 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nmd5x" Feb 16 14:56:27 crc kubenswrapper[4748]: I0216 14:56:27.624175 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:56:27 crc kubenswrapper[4748]: I0216 14:56:27.669664 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:56:28 crc kubenswrapper[4748]: I0216 14:56:28.365521 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rpxjm"] Feb 16 14:56:28 crc kubenswrapper[4748]: I0216 14:56:28.793834 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rpxjm" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="registry-server" containerID="cri-o://fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c" gracePeriod=2 Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.774450 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.800798 4748 generic.go:334] "Generic (PLEG): container finished" podID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerID="fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c" exitCode=0 Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.800843 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpxjm" event={"ID":"690e9611-61fc-40f8-b3a6-706d8b6218f0","Type":"ContainerDied","Data":"fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c"} Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.800872 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rpxjm" event={"ID":"690e9611-61fc-40f8-b3a6-706d8b6218f0","Type":"ContainerDied","Data":"995891c9dc7cceff51608e03c10204432efb14291fe79cb09ac331aee78ac68e"} Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.800892 4748 scope.go:117] "RemoveContainer" containerID="fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.801021 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rpxjm" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.816546 4748 scope.go:117] "RemoveContainer" containerID="1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.837254 4748 scope.go:117] "RemoveContainer" containerID="227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.856947 4748 scope.go:117] "RemoveContainer" containerID="fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c" Feb 16 14:56:29 crc kubenswrapper[4748]: E0216 14:56:29.857641 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c\": container with ID starting with fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c not found: ID does not exist" containerID="fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.857705 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c"} err="failed to get container status \"fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c\": rpc error: code = NotFound desc = could not find container \"fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c\": container with ID starting with fcb5ef7c8aca6f4d8bdca7b48071d3026ff56509514da2f145070c1216d43c6c not found: ID does not exist" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.857871 4748 scope.go:117] "RemoveContainer" containerID="1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647" Feb 16 14:56:29 crc kubenswrapper[4748]: E0216 14:56:29.858279 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647\": container with ID starting with 1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647 not found: ID does not exist" containerID="1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.858351 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647"} err="failed to get container status \"1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647\": rpc error: code = NotFound desc = could not find container \"1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647\": container with ID starting with 1ad2e3700a5980f4e3a750f5f1f22bada115d8bea6f47b5331114bb03af14647 not found: ID does not exist" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.858403 4748 scope.go:117] "RemoveContainer" containerID="227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad" Feb 16 14:56:29 crc kubenswrapper[4748]: E0216 14:56:29.859010 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad\": container with ID starting with 227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad not found: ID does not exist" containerID="227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.859044 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad"} err="failed to get container status \"227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad\": rpc error: code = NotFound desc = could not find container 
\"227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad\": container with ID starting with 227f59c67e9936d223d96fea348b92c9b7c2f094d6d46daf38f758e563892aad not found: ID does not exist" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.926260 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-utilities\") pod \"690e9611-61fc-40f8-b3a6-706d8b6218f0\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.926341 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-catalog-content\") pod \"690e9611-61fc-40f8-b3a6-706d8b6218f0\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.926389 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkr6h\" (UniqueName: \"kubernetes.io/projected/690e9611-61fc-40f8-b3a6-706d8b6218f0-kube-api-access-zkr6h\") pod \"690e9611-61fc-40f8-b3a6-706d8b6218f0\" (UID: \"690e9611-61fc-40f8-b3a6-706d8b6218f0\") " Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.927201 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-utilities" (OuterVolumeSpecName: "utilities") pod "690e9611-61fc-40f8-b3a6-706d8b6218f0" (UID: "690e9611-61fc-40f8-b3a6-706d8b6218f0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:29 crc kubenswrapper[4748]: I0216 14:56:29.932342 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/690e9611-61fc-40f8-b3a6-706d8b6218f0-kube-api-access-zkr6h" (OuterVolumeSpecName: "kube-api-access-zkr6h") pod "690e9611-61fc-40f8-b3a6-706d8b6218f0" (UID: "690e9611-61fc-40f8-b3a6-706d8b6218f0"). InnerVolumeSpecName "kube-api-access-zkr6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.028284 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.028324 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkr6h\" (UniqueName: \"kubernetes.io/projected/690e9611-61fc-40f8-b3a6-706d8b6218f0-kube-api-access-zkr6h\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.057253 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "690e9611-61fc-40f8-b3a6-706d8b6218f0" (UID: "690e9611-61fc-40f8-b3a6-706d8b6218f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.130365 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/690e9611-61fc-40f8-b3a6-706d8b6218f0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.135444 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rpxjm"] Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.137850 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rpxjm"] Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.548444 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" podUID="aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" containerName="oauth-openshift" containerID="cri-o://79a60fd10a132d21ff531eb614f810cf3d7b335708d2cdcffb92046ed6742bc4" gracePeriod=15 Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.810983 4748 generic.go:334] "Generic (PLEG): container finished" podID="aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" containerID="79a60fd10a132d21ff531eb614f810cf3d7b335708d2cdcffb92046ed6742bc4" exitCode=0 Feb 16 14:56:30 crc kubenswrapper[4748]: I0216 14:56:30.811091 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" event={"ID":"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae","Type":"ContainerDied","Data":"79a60fd10a132d21ff531eb614f810cf3d7b335708d2cdcffb92046ed6742bc4"} Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.007641 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" path="/var/lib/kubelet/pods/690e9611-61fc-40f8-b3a6-706d8b6218f0/volumes" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.030487 4748 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.143361 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-trusted-ca-bundle\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.144064 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5kl6\" (UniqueName: \"kubernetes.io/projected/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-kube-api-access-l5kl6\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.144333 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-error\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.144693 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-dir\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.144946 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-router-certs\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc 
kubenswrapper[4748]: I0216 14:56:31.145438 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.145816 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.145927 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-ocp-branding-template\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.146441 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-session\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.146621 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-login\") pod 
\"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.146956 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-policies\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.147190 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-service-ca\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.147356 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-serving-cert\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.147517 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-idp-0-file-data\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.147789 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-provider-selection\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: 
\"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.148058 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-cliconfig\") pod \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\" (UID: \"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae\") " Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.148070 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.148094 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.149331 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.151180 4748 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.151554 4748 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.151656 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.151767 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.156830 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.157895 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.157692 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.158863 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-kube-api-access-l5kl6" (OuterVolumeSpecName: "kube-api-access-l5kl6") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "kube-api-access-l5kl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.159353 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.160761 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.161357 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.163087 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.163875 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" (UID: "aff3886f-9ddb-47c3-8a3b-3db0a8df51ae"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253403 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l5kl6\" (UniqueName: \"kubernetes.io/projected/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-kube-api-access-l5kl6\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253469 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253492 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253516 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253537 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253556 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253576 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253662 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253685 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.253707 4748 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.821152 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" event={"ID":"aff3886f-9ddb-47c3-8a3b-3db0a8df51ae","Type":"ContainerDied","Data":"fe889d88e498002833cdd03d3a7be2c070472db662ee52d68ac595f4dbaead1f"} Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.821247 4748 scope.go:117] "RemoveContainer" containerID="79a60fd10a132d21ff531eb614f810cf3d7b335708d2cdcffb92046ed6742bc4" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.822121 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-2bw7n" Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.852204 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bw7n"] Feb 16 14:56:31 crc kubenswrapper[4748]: I0216 14:56:31.858909 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-2bw7n"] Feb 16 14:56:33 crc kubenswrapper[4748]: I0216 14:56:33.008832 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" path="/var/lib/kubelet/pods/aff3886f-9ddb-47c3-8a3b-3db0a8df51ae/volumes" Feb 16 14:56:34 crc kubenswrapper[4748]: I0216 14:56:34.730524 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:56:34 crc kubenswrapper[4748]: I0216 14:56:34.731060 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:56:34 crc kubenswrapper[4748]: I0216 14:56:34.731113 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 14:56:34 crc kubenswrapper[4748]: I0216 14:56:34.731818 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 14:56:34 crc kubenswrapper[4748]: I0216 14:56:34.731884 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750" gracePeriod=600 Feb 16 14:56:35 crc kubenswrapper[4748]: I0216 14:56:35.853186 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750" exitCode=0 Feb 16 14:56:35 crc kubenswrapper[4748]: I0216 14:56:35.853276 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750"} Feb 16 14:56:35 crc kubenswrapper[4748]: I0216 14:56:35.854088 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"1f548aebb058ac34928d579a653cfc879fc0a3301fa7df0272936a4d6928bf41"} Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.715040 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-55c7db9594-rpxsb"] Feb 16 14:56:36 crc kubenswrapper[4748]: E0216 14:56:36.715767 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" containerName="oauth-openshift" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.715780 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" containerName="oauth-openshift" Feb 16 
14:56:36 crc kubenswrapper[4748]: E0216 14:56:36.715793 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="registry-server" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.715800 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="registry-server" Feb 16 14:56:36 crc kubenswrapper[4748]: E0216 14:56:36.715812 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="extract-content" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.715819 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="extract-content" Feb 16 14:56:36 crc kubenswrapper[4748]: E0216 14:56:36.715829 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="extract-utilities" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.715834 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="extract-utilities" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.715935 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="690e9611-61fc-40f8-b3a6-706d8b6218f0" containerName="registry-server" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.715951 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="aff3886f-9ddb-47c3-8a3b-3db0a8df51ae" containerName="oauth-openshift" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.716472 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.720352 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.720562 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.721778 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.721975 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.722111 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.724299 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.724424 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.725081 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.725382 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.725550 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 14:56:36 crc 
kubenswrapper[4748]: I0216 14:56:36.725775 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.725822 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.736879 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.744239 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.748122 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.748158 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7db9594-rpxsb"] Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828628 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-audit-dir\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828691 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " 
pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828742 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828787 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z55ll\" (UniqueName: \"kubernetes.io/projected/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-kube-api-access-z55ll\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828812 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828839 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-audit-policies\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828861 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.828977 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.829050 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.829115 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.829199 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.829228 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.829268 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.829345 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-session\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931487 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-audit-dir\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " 
pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931574 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931621 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931679 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-audit-dir\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931692 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931808 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z55ll\" (UniqueName: 
\"kubernetes.io/projected/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-kube-api-access-z55ll\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931859 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-audit-policies\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931882 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931918 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931940 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " 
pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.931995 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.932089 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.932118 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.932184 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.932223 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-session\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.932978 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-service-ca\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.933218 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-audit-policies\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.933335 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.934187 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " 
pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.940398 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-router-certs\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.940425 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.940598 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.940777 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.940823 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-error\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.941172 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-session\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.941345 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-user-template-login\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.941575 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:36 crc kubenswrapper[4748]: I0216 14:56:36.952948 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z55ll\" (UniqueName: \"kubernetes.io/projected/a2886196-d7b8-457d-9fd2-99b9e5e2b17d-kube-api-access-z55ll\") pod \"oauth-openshift-55c7db9594-rpxsb\" (UID: \"a2886196-d7b8-457d-9fd2-99b9e5e2b17d\") " 
pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:37 crc kubenswrapper[4748]: I0216 14:56:37.035075 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:37 crc kubenswrapper[4748]: I0216 14:56:37.496077 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-55c7db9594-rpxsb"] Feb 16 14:56:37 crc kubenswrapper[4748]: I0216 14:56:37.874328 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" event={"ID":"a2886196-d7b8-457d-9fd2-99b9e5e2b17d","Type":"ContainerStarted","Data":"f429276968a0bf7c1ba4e64a05c0a723ba8927ab1845e7ad83f2bb61525d0a7b"} Feb 16 14:56:37 crc kubenswrapper[4748]: I0216 14:56:37.874848 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" event={"ID":"a2886196-d7b8-457d-9fd2-99b9e5e2b17d","Type":"ContainerStarted","Data":"12528c65f877abed1bad1d00d1a240b1419c4508b7db0e4090421dd1691d138b"} Feb 16 14:56:37 crc kubenswrapper[4748]: I0216 14:56:37.875114 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:37 crc kubenswrapper[4748]: I0216 14:56:37.877830 4748 patch_prober.go:28] interesting pod/oauth-openshift-55c7db9594-rpxsb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.63:6443/healthz\": dial tcp 10.217.0.63:6443: connect: connection refused" start-of-body= Feb 16 14:56:37 crc kubenswrapper[4748]: I0216 14:56:37.877924 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" podUID="a2886196-d7b8-457d-9fd2-99b9e5e2b17d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.63:6443/healthz\": 
dial tcp 10.217.0.63:6443: connect: connection refused" Feb 16 14:56:38 crc kubenswrapper[4748]: I0216 14:56:38.890967 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" Feb 16 14:56:38 crc kubenswrapper[4748]: I0216 14:56:38.915956 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-55c7db9594-rpxsb" podStartSLOduration=33.915928678 podStartE2EDuration="33.915928678s" podCreationTimestamp="2026-02-16 14:56:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:37.910179848 +0000 UTC m=+223.601849007" watchObservedRunningTime="2026-02-16 14:56:38.915928678 +0000 UTC m=+224.607597717" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.141905 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59f965dd57-k5jt7"] Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.142167 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" podUID="5d11a380-e4a7-44fa-9ae9-ae190ea127bd" containerName="controller-manager" containerID="cri-o://e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a" gracePeriod=30 Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.226289 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm"] Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.226603 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" podUID="04f4723e-39c4-4094-88c1-7e134269587d" containerName="route-controller-manager" 
containerID="cri-o://2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339" gracePeriod=30 Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.721347 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.813243 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.880208 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct5j6\" (UniqueName: \"kubernetes.io/projected/04f4723e-39c4-4094-88c1-7e134269587d-kube-api-access-ct5j6\") pod \"04f4723e-39c4-4094-88c1-7e134269587d\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.880310 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-client-ca\") pod \"04f4723e-39c4-4094-88c1-7e134269587d\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.880374 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f4723e-39c4-4094-88c1-7e134269587d-serving-cert\") pod \"04f4723e-39c4-4094-88c1-7e134269587d\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.880418 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-config\") pod \"04f4723e-39c4-4094-88c1-7e134269587d\" (UID: \"04f4723e-39c4-4094-88c1-7e134269587d\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 
14:56:39.881298 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-config" (OuterVolumeSpecName: "config") pod "04f4723e-39c4-4094-88c1-7e134269587d" (UID: "04f4723e-39c4-4094-88c1-7e134269587d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.881367 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-client-ca" (OuterVolumeSpecName: "client-ca") pod "04f4723e-39c4-4094-88c1-7e134269587d" (UID: "04f4723e-39c4-4094-88c1-7e134269587d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.886114 4748 generic.go:334] "Generic (PLEG): container finished" podID="04f4723e-39c4-4094-88c1-7e134269587d" containerID="2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339" exitCode=0 Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.886327 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04f4723e-39c4-4094-88c1-7e134269587d-kube-api-access-ct5j6" (OuterVolumeSpecName: "kube-api-access-ct5j6") pod "04f4723e-39c4-4094-88c1-7e134269587d" (UID: "04f4723e-39c4-4094-88c1-7e134269587d"). InnerVolumeSpecName "kube-api-access-ct5j6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.886467 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" event={"ID":"04f4723e-39c4-4094-88c1-7e134269587d","Type":"ContainerDied","Data":"2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339"} Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.886492 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04f4723e-39c4-4094-88c1-7e134269587d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "04f4723e-39c4-4094-88c1-7e134269587d" (UID: "04f4723e-39c4-4094-88c1-7e134269587d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.886528 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" event={"ID":"04f4723e-39c4-4094-88c1-7e134269587d","Type":"ContainerDied","Data":"231396e57a63df4bad8ba93fc4e5aef944c61c65afcfe3a8c1042498f3a438d6"} Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.886526 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.886550 4748 scope.go:117] "RemoveContainer" containerID="2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.889971 4748 generic.go:334] "Generic (PLEG): container finished" podID="5d11a380-e4a7-44fa-9ae9-ae190ea127bd" containerID="e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a" exitCode=0 Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.891338 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.891644 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" event={"ID":"5d11a380-e4a7-44fa-9ae9-ae190ea127bd","Type":"ContainerDied","Data":"e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a"} Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.891668 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-59f965dd57-k5jt7" event={"ID":"5d11a380-e4a7-44fa-9ae9-ae190ea127bd","Type":"ContainerDied","Data":"04e92cce43684aa2a6a8fe381768cadcb52a0e0dbd54b680a2d379fe05fcb494"} Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.913884 4748 scope.go:117] "RemoveContainer" containerID="2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339" Feb 16 14:56:39 crc kubenswrapper[4748]: E0216 14:56:39.914981 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339\": container with ID starting with 2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339 not found: ID does not exist" containerID="2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.915033 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339"} err="failed to get container status \"2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339\": rpc error: code = NotFound desc = could not find container \"2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339\": container with ID starting with 2b4fa0c9f9793880c8af0a32541bf126fd8a1d5664be68167cb37bfa398ce339 not found: ID does 
not exist" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.915068 4748 scope.go:117] "RemoveContainer" containerID="e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.917251 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm"] Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.920498 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5ff6bb6789-drwnm"] Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.934977 4748 scope.go:117] "RemoveContainer" containerID="e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a" Feb 16 14:56:39 crc kubenswrapper[4748]: E0216 14:56:39.935490 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a\": container with ID starting with e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a not found: ID does not exist" containerID="e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.935595 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a"} err="failed to get container status \"e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a\": rpc error: code = NotFound desc = could not find container \"e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a\": container with ID starting with e4e56eb2cdb673258ea5fb9db8d53181aff639608266784381dd5eababa80b6a not found: ID does not exist" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981068 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-bgntk\" (UniqueName: \"kubernetes.io/projected/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-kube-api-access-bgntk\") pod \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981128 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-serving-cert\") pod \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981178 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-config\") pod \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981214 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-proxy-ca-bundles\") pod \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981281 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-client-ca\") pod \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\" (UID: \"5d11a380-e4a7-44fa-9ae9-ae190ea127bd\") " Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981607 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981621 4748 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/04f4723e-39c4-4094-88c1-7e134269587d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981629 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/04f4723e-39c4-4094-88c1-7e134269587d-config\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.981638 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct5j6\" (UniqueName: \"kubernetes.io/projected/04f4723e-39c4-4094-88c1-7e134269587d-kube-api-access-ct5j6\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.982933 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5d11a380-e4a7-44fa-9ae9-ae190ea127bd" (UID: "5d11a380-e4a7-44fa-9ae9-ae190ea127bd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.983130 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-config" (OuterVolumeSpecName: "config") pod "5d11a380-e4a7-44fa-9ae9-ae190ea127bd" (UID: "5d11a380-e4a7-44fa-9ae9-ae190ea127bd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.983296 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-client-ca" (OuterVolumeSpecName: "client-ca") pod "5d11a380-e4a7-44fa-9ae9-ae190ea127bd" (UID: "5d11a380-e4a7-44fa-9ae9-ae190ea127bd"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.987740 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-kube-api-access-bgntk" (OuterVolumeSpecName: "kube-api-access-bgntk") pod "5d11a380-e4a7-44fa-9ae9-ae190ea127bd" (UID: "5d11a380-e4a7-44fa-9ae9-ae190ea127bd"). InnerVolumeSpecName "kube-api-access-bgntk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:39 crc kubenswrapper[4748]: I0216 14:56:39.989018 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5d11a380-e4a7-44fa-9ae9-ae190ea127bd" (UID: "5d11a380-e4a7-44fa-9ae9-ae190ea127bd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.083001 4748 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.083321 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgntk\" (UniqueName: \"kubernetes.io/projected/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-kube-api-access-bgntk\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.083482 4748 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.083606 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-config\") on node \"crc\" DevicePath 
\"\"" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.083786 4748 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5d11a380-e4a7-44fa-9ae9-ae190ea127bd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.249791 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-59f965dd57-k5jt7"] Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.253242 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-59f965dd57-k5jt7"] Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.718580 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-68765457f9-n5tq4"] Feb 16 14:56:40 crc kubenswrapper[4748]: E0216 14:56:40.718925 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d11a380-e4a7-44fa-9ae9-ae190ea127bd" containerName="controller-manager" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.718940 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d11a380-e4a7-44fa-9ae9-ae190ea127bd" containerName="controller-manager" Feb 16 14:56:40 crc kubenswrapper[4748]: E0216 14:56:40.718953 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04f4723e-39c4-4094-88c1-7e134269587d" containerName="route-controller-manager" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.718959 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="04f4723e-39c4-4094-88c1-7e134269587d" containerName="route-controller-manager" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.719051 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="04f4723e-39c4-4094-88c1-7e134269587d" containerName="route-controller-manager" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.719064 4748 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5d11a380-e4a7-44fa-9ae9-ae190ea127bd" containerName="controller-manager" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.719566 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.724400 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb"] Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.725262 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.726374 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.726593 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.726702 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.726853 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.726886 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.727239 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.729153 4748 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.729742 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.729827 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.730653 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.730868 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.733967 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.735310 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68765457f9-n5tq4"] Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.742368 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.743560 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb"] Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.895863 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-serving-cert\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " 
pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.895970 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-config\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.896072 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542e017c-f005-45e2-b493-a749c2ec400d-serving-cert\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.896185 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp8tl\" (UniqueName: \"kubernetes.io/projected/542e017c-f005-45e2-b493-a749c2ec400d-kube-api-access-rp8tl\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.896252 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk22d\" (UniqueName: \"kubernetes.io/projected/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-kube-api-access-wk22d\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.896318 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-config\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.896380 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-client-ca\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.896417 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-proxy-ca-bundles\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.896483 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-client-ca\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.997277 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk22d\" (UniqueName: \"kubernetes.io/projected/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-kube-api-access-wk22d\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: 
\"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.998810 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-config\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.998905 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-client-ca\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.998951 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-proxy-ca-bundles\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.998985 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-client-ca\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.999057 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-serving-cert\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.999117 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-config\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.999134 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-config\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.999160 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542e017c-f005-45e2-b493-a749c2ec400d-serving-cert\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:40 crc kubenswrapper[4748]: I0216 14:56:40.999224 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp8tl\" (UniqueName: \"kubernetes.io/projected/542e017c-f005-45e2-b493-a749c2ec400d-kube-api-access-rp8tl\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.000223 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-client-ca\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.000376 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-client-ca\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.000771 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-proxy-ca-bundles\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.002073 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/542e017c-f005-45e2-b493-a749c2ec400d-config\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.003826 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/542e017c-f005-45e2-b493-a749c2ec400d-serving-cert\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc 
kubenswrapper[4748]: I0216 14:56:41.006160 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-serving-cert\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.009787 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04f4723e-39c4-4094-88c1-7e134269587d" path="/var/lib/kubelet/pods/04f4723e-39c4-4094-88c1-7e134269587d/volumes" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.013337 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d11a380-e4a7-44fa-9ae9-ae190ea127bd" path="/var/lib/kubelet/pods/5d11a380-e4a7-44fa-9ae9-ae190ea127bd/volumes" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.016419 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk22d\" (UniqueName: \"kubernetes.io/projected/8216fba9-8e4a-4b4e-bb53-849c8dd0d61a-kube-api-access-wk22d\") pod \"route-controller-manager-69c76697d-9mtdb\" (UID: \"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a\") " pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.017001 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp8tl\" (UniqueName: \"kubernetes.io/projected/542e017c-f005-45e2-b493-a749c2ec400d-kube-api-access-rp8tl\") pod \"controller-manager-68765457f9-n5tq4\" (UID: \"542e017c-f005-45e2-b493-a749c2ec400d\") " pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.041377 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.063125 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.482076 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-68765457f9-n5tq4"] Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.584087 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb"] Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.909581 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" event={"ID":"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a","Type":"ContainerStarted","Data":"6debcad823aad984a998d0831645a0fce8e3ecc7df1a180c749813f7731a8baa"} Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.910115 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.910136 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" event={"ID":"8216fba9-8e4a-4b4e-bb53-849c8dd0d61a","Type":"ContainerStarted","Data":"b0bc01645c8c0357e8648cb1ea11b9af5c75f47ab12c50d9afac6bd8c9ca1a61"} Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.911112 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" event={"ID":"542e017c-f005-45e2-b493-a749c2ec400d","Type":"ContainerStarted","Data":"7ec4f6ef0bad6a321bd477c40af2c696da67c1c17b696bf648e988b4b69491e9"} Feb 16 14:56:41 crc 
kubenswrapper[4748]: I0216 14:56:41.911162 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" event={"ID":"542e017c-f005-45e2-b493-a749c2ec400d","Type":"ContainerStarted","Data":"7c01b00b5d3849967a45cafff639f26cad11e53d809242828ac9e4d4ff9bc409"} Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.911836 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.923379 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.935564 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" podStartSLOduration=2.935533346 podStartE2EDuration="2.935533346s" podCreationTimestamp="2026-02-16 14:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:41.930892862 +0000 UTC m=+227.622561901" watchObservedRunningTime="2026-02-16 14:56:41.935533346 +0000 UTC m=+227.627202385" Feb 16 14:56:41 crc kubenswrapper[4748]: I0216 14:56:41.952700 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-68765457f9-n5tq4" podStartSLOduration=2.952658358 podStartE2EDuration="2.952658358s" podCreationTimestamp="2026-02-16 14:56:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:56:41.949411188 +0000 UTC m=+227.641080227" watchObservedRunningTime="2026-02-16 14:56:41.952658358 +0000 UTC m=+227.644327397" Feb 16 14:56:42 crc kubenswrapper[4748]: I0216 14:56:42.266172 4748 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69c76697d-9mtdb" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.496750 4748 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.498073 4748 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.498350 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da" gracePeriod=15 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.498405 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83" gracePeriod=15 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.498440 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.498471 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538" gracePeriod=15 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.498434 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e" gracePeriod=15 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.498552 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65" gracePeriod=15 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500273 4748 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.500556 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500577 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.500640 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500650 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.500663 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500671 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.500682 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500688 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.500699 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500707 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.500742 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500750 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.500762 4748 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500769 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500895 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500914 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500926 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500934 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500942 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.500949 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.536873 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543582 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543663 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543692 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543791 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543818 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543843 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543866 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.543910 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.644931 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645005 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645079 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645115 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645153 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645195 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645232 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645291 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645425 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645490 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645493 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645527 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645546 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645566 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645647 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.645705 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.835355 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:56:49 crc kubenswrapper[4748]: E0216 14:56:49.876087 4748 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c1f4a07c3af8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:56:49.872124664 +0000 UTC m=+235.563793703,LastTimestamp:2026-02-16 14:56:49.872124664 +0000 UTC m=+235.563793703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.975362 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"7d0c0ef6a80b932f6e0760ef7ef76cc98093440da6c46df1cb4eacc61c8ad7ce"} Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.985238 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.987943 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.990519 4748 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e" exitCode=0 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.990552 4748 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538" exitCode=0 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.990561 4748 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83" exitCode=0 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.990571 4748 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65" exitCode=2 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.990625 4748 scope.go:117] "RemoveContainer" containerID="6503100dc4e9e66b8241015c7a68184f8e27c48f193edfd9a963ca8066c13a55" Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.998541 4748 generic.go:334] "Generic (PLEG): container finished" podID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" containerID="481dc11c4204b6ac67447a60ee211a30d42b5dd1f3835bd90260b4ccd4d44f66" exitCode=0 Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.998591 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34d1fe20-ca14-4999-8370-2e8fd245ed7f","Type":"ContainerDied","Data":"481dc11c4204b6ac67447a60ee211a30d42b5dd1f3835bd90260b4ccd4d44f66"} Feb 16 14:56:49 crc kubenswrapper[4748]: I0216 14:56:49.999440 4748 status_manager.go:851] "Failed to get 
status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:50 crc kubenswrapper[4748]: I0216 14:56:50.000034 4748 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:50 crc kubenswrapper[4748]: I0216 14:56:50.000372 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.005786 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48"} Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.006539 4748 status_manager.go:851] "Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.006824 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.009772 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.421506 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.422413 4748 status_manager.go:851] "Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.422586 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.572847 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kubelet-dir\") pod \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.572909 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kube-api-access\") pod \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.573013 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "34d1fe20-ca14-4999-8370-2e8fd245ed7f" (UID: "34d1fe20-ca14-4999-8370-2e8fd245ed7f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.573091 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-var-lock\") pod \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\" (UID: \"34d1fe20-ca14-4999-8370-2e8fd245ed7f\") " Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.573318 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-var-lock" (OuterVolumeSpecName: "var-lock") pod "34d1fe20-ca14-4999-8370-2e8fd245ed7f" (UID: "34d1fe20-ca14-4999-8370-2e8fd245ed7f"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.574098 4748 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.574137 4748 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/34d1fe20-ca14-4999-8370-2e8fd245ed7f-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.607157 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "34d1fe20-ca14-4999-8370-2e8fd245ed7f" (UID: "34d1fe20-ca14-4999-8370-2e8fd245ed7f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.675860 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/34d1fe20-ca14-4999-8370-2e8fd245ed7f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.880230 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.881274 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.882082 4748 status_manager.go:851] "Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.882836 4748 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:51 crc kubenswrapper[4748]: I0216 14:56:51.883490 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.018745 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"34d1fe20-ca14-4999-8370-2e8fd245ed7f","Type":"ContainerDied","Data":"bbb3901eddea35c76ebc0b99353580b1fa204e1d1e39f53d44a343bcc17e4405"} Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.018822 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb3901eddea35c76ebc0b99353580b1fa204e1d1e39f53d44a343bcc17e4405" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.018840 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.023991 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.025362 4748 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da" exitCode=0 Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.025481 4748 scope.go:117] "RemoveContainer" containerID="dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.025589 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.044151 4748 status_manager.go:851] "Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.045099 4748 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.045655 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.060064 4748 scope.go:117] "RemoveContainer" containerID="bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.081121 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.081228 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.081396 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.081427 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.081529 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.081772 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.082643 4748 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.082681 4748 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.082700 4748 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.086020 4748 scope.go:117] "RemoveContainer" containerID="97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.112090 4748 scope.go:117] "RemoveContainer" 
containerID="595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.133962 4748 scope.go:117] "RemoveContainer" containerID="c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.159000 4748 scope.go:117] "RemoveContainer" containerID="fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.187979 4748 scope.go:117] "RemoveContainer" containerID="dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e" Feb 16 14:56:52 crc kubenswrapper[4748]: E0216 14:56:52.188549 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\": container with ID starting with dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e not found: ID does not exist" containerID="dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.188593 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e"} err="failed to get container status \"dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\": rpc error: code = NotFound desc = could not find container \"dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e\": container with ID starting with dac05d77900fc400b15cf16d990928404abbb4d6485a22505e664fffcb141e2e not found: ID does not exist" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.188634 4748 scope.go:117] "RemoveContainer" containerID="bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538" Feb 16 14:56:52 crc kubenswrapper[4748]: E0216 14:56:52.189107 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\": container with ID starting with bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538 not found: ID does not exist" containerID="bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.189139 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538"} err="failed to get container status \"bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\": rpc error: code = NotFound desc = could not find container \"bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538\": container with ID starting with bf224d131f9722cf6c6f1e7c4892cc5c08e07e2bd82da2549325626adae0b538 not found: ID does not exist" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.189158 4748 scope.go:117] "RemoveContainer" containerID="97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83" Feb 16 14:56:52 crc kubenswrapper[4748]: E0216 14:56:52.189707 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\": container with ID starting with 97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83 not found: ID does not exist" containerID="97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.189753 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83"} err="failed to get container status \"97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\": rpc error: code = NotFound desc = could not find container 
\"97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83\": container with ID starting with 97deb7b13e335364db0b92d603da1bd49871ebe190e37dc7ba0f6d22e43c0a83 not found: ID does not exist" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.189774 4748 scope.go:117] "RemoveContainer" containerID="595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65" Feb 16 14:56:52 crc kubenswrapper[4748]: E0216 14:56:52.190215 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\": container with ID starting with 595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65 not found: ID does not exist" containerID="595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.190244 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65"} err="failed to get container status \"595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\": rpc error: code = NotFound desc = could not find container \"595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65\": container with ID starting with 595a3c3e4bd5643681be300a57d3aeaa3674551c163f4195f5a183fd36a87d65 not found: ID does not exist" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.190263 4748 scope.go:117] "RemoveContainer" containerID="c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da" Feb 16 14:56:52 crc kubenswrapper[4748]: E0216 14:56:52.190846 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\": container with ID starting with c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da not found: ID does not exist" 
containerID="c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.190878 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da"} err="failed to get container status \"c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\": rpc error: code = NotFound desc = could not find container \"c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da\": container with ID starting with c50c3abe56d38ea03561d8ffff25ced8137d126f76081ba5c028670411a647da not found: ID does not exist" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.190894 4748 scope.go:117] "RemoveContainer" containerID="fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756" Feb 16 14:56:52 crc kubenswrapper[4748]: E0216 14:56:52.191360 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\": container with ID starting with fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756 not found: ID does not exist" containerID="fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.191436 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756"} err="failed to get container status \"fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\": rpc error: code = NotFound desc = could not find container \"fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756\": container with ID starting with fdf23096d8d32c124fd6ebc79f935f35447153585f5021914b74f3b3b946b756 not found: ID does not exist" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.341490 4748 status_manager.go:851] 
"Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.342095 4748 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:52 crc kubenswrapper[4748]: I0216 14:56:52.342514 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:53 crc kubenswrapper[4748]: I0216 14:56:53.003604 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.233622 4748 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.234699 4748 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.235446 4748 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.236069 4748 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.236615 4748 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:54 crc kubenswrapper[4748]: I0216 14:56:54.236673 4748 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.237078 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="200ms" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.437886 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="400ms" Feb 16 14:56:54 crc kubenswrapper[4748]: E0216 14:56:54.839958 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" 
interval="800ms" Feb 16 14:56:54 crc kubenswrapper[4748]: I0216 14:56:54.996879 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:54 crc kubenswrapper[4748]: I0216 14:56:54.997342 4748 status_manager.go:851] "Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:56:55 crc kubenswrapper[4748]: E0216 14:56:55.378316 4748 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.51:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.1894c1f4a07c3af8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 14:56:49.872124664 +0000 UTC m=+235.563793703,LastTimestamp:2026-02-16 14:56:49.872124664 +0000 UTC m=+235.563793703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 14:56:55 crc kubenswrapper[4748]: E0216 14:56:55.640491 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="1.6s" Feb 16 14:56:57 crc kubenswrapper[4748]: E0216 14:56:57.241975 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="3.2s" Feb 16 14:57:00 crc kubenswrapper[4748]: E0216 14:57:00.443588 4748 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.51:6443: connect: connection refused" interval="6.4s" Feb 16 14:57:01 crc kubenswrapper[4748]: I0216 14:57:01.994285 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:01 crc kubenswrapper[4748]: I0216 14:57:01.995592 4748 status_manager.go:851] "Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:57:01 crc kubenswrapper[4748]: I0216 14:57:01.996320 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:57:02 crc kubenswrapper[4748]: I0216 14:57:02.019103 4748 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:02 crc kubenswrapper[4748]: I0216 14:57:02.019159 4748 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:02 crc kubenswrapper[4748]: E0216 14:57:02.019822 4748 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:02 crc kubenswrapper[4748]: I0216 14:57:02.020550 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:02 crc kubenswrapper[4748]: I0216 14:57:02.109948 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b65b139b50d6edf3d1f574d4c910c335ce85ac25725d4acaa0c79a4762710458"} Feb 16 14:57:03 crc kubenswrapper[4748]: I0216 14:57:03.118500 4748 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f5fa6a1d3c34a944264df154296d9252d393f3d8a6eae923ba7dbc48c1af7118" exitCode=0 Feb 16 14:57:03 crc kubenswrapper[4748]: I0216 14:57:03.118832 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f5fa6a1d3c34a944264df154296d9252d393f3d8a6eae923ba7dbc48c1af7118"} Feb 16 14:57:03 crc kubenswrapper[4748]: I0216 14:57:03.119341 4748 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:03 crc kubenswrapper[4748]: I0216 14:57:03.119388 4748 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:03 crc kubenswrapper[4748]: E0216 14:57:03.120162 4748 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:03 crc kubenswrapper[4748]: I0216 14:57:03.120237 4748 status_manager.go:851] "Failed to get status for pod" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:57:03 crc kubenswrapper[4748]: I0216 14:57:03.121217 4748 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.51:6443: connect: connection refused" Feb 16 14:57:04 crc kubenswrapper[4748]: I0216 14:57:04.131508 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"613e9b8bc2c55fd75f3dad8c70d819b0a4d5a722faf72dd21af1678ccc7ec8d2"} Feb 16 14:57:04 crc kubenswrapper[4748]: I0216 14:57:04.131902 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"02beca9677de6d107f76030c944a5c3dc7578f3e1fcdf6fcf847619b3180c705"} Feb 16 14:57:04 crc kubenswrapper[4748]: I0216 14:57:04.131913 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8797d3ae76d006e168f658d90c9229b175ace4ff214d5e661e89c9cdfeef3a93"} Feb 16 14:57:04 crc kubenswrapper[4748]: I0216 14:57:04.850500 4748 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Feb 16 14:57:04 crc kubenswrapper[4748]: I0216 14:57:04.850592 4748 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.140916 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"af6454d4e31d7d55713bfbe0b4240d1562d36ef2ccd0d2681b282125130c5537"} Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.140973 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3d51c94c613216460f6aa3260b01a0e9936e4a66d36f029bbce8f5523fc7edf7"} Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.141055 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.141182 4748 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.141202 4748 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.145035 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.145105 4748 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" 
containerID="7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7" exitCode=1 Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.145156 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7"} Feb 16 14:57:05 crc kubenswrapper[4748]: I0216 14:57:05.145798 4748 scope.go:117] "RemoveContainer" containerID="7ae1260b55048cf3950082b5ef232fe9fc49008ea6565b4f8f75afd0e7463de7" Feb 16 14:57:06 crc kubenswrapper[4748]: I0216 14:57:06.154445 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 16 14:57:06 crc kubenswrapper[4748]: I0216 14:57:06.154997 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3be59582d65dc7a8041f8912e079f3af35d5b0f1926261d92c9a4aad0294a2e5"} Feb 16 14:57:07 crc kubenswrapper[4748]: I0216 14:57:07.021107 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:07 crc kubenswrapper[4748]: I0216 14:57:07.021164 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:07 crc kubenswrapper[4748]: I0216 14:57:07.029948 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:10 crc kubenswrapper[4748]: I0216 14:57:10.294475 4748 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:10 crc kubenswrapper[4748]: I0216 14:57:10.352121 4748 
status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b79b32de-6a0c-4f98-ac2c-d62c9441d075" Feb 16 14:57:11 crc kubenswrapper[4748]: I0216 14:57:11.193443 4748 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:11 crc kubenswrapper[4748]: I0216 14:57:11.193509 4748 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="622ff7e0-621f-44e9-8cc3-a2a9ef9f5f28" Feb 16 14:57:11 crc kubenswrapper[4748]: I0216 14:57:11.200621 4748 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b79b32de-6a0c-4f98-ac2c-d62c9441d075" Feb 16 14:57:12 crc kubenswrapper[4748]: I0216 14:57:12.534860 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:12 crc kubenswrapper[4748]: I0216 14:57:12.539903 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:13 crc kubenswrapper[4748]: I0216 14:57:13.206972 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:20 crc kubenswrapper[4748]: I0216 14:57:20.443817 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 16 14:57:20 crc kubenswrapper[4748]: I0216 14:57:20.680939 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 16 14:57:20 crc kubenswrapper[4748]: I0216 14:57:20.741456 4748 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 16 14:57:20 crc kubenswrapper[4748]: I0216 14:57:20.759310 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 16 14:57:21 crc kubenswrapper[4748]: I0216 14:57:21.057624 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 16 14:57:21 crc kubenswrapper[4748]: I0216 14:57:21.493410 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 16 14:57:21 crc kubenswrapper[4748]: I0216 14:57:21.602018 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 16 14:57:21 crc kubenswrapper[4748]: I0216 14:57:21.965681 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 16 14:57:21 crc kubenswrapper[4748]: I0216 14:57:21.982010 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 16 14:57:21 crc kubenswrapper[4748]: I0216 14:57:21.991428 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 16 14:57:22 crc kubenswrapper[4748]: I0216 14:57:22.146260 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 16 14:57:22 crc kubenswrapper[4748]: I0216 14:57:22.542804 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 16 14:57:22 crc kubenswrapper[4748]: I0216 14:57:22.550377 4748 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 16 14:57:22 crc kubenswrapper[4748]: I0216 14:57:22.603032 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 16 14:57:22 crc kubenswrapper[4748]: I0216 14:57:22.807380 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 16 14:57:22 crc kubenswrapper[4748]: I0216 14:57:22.928791 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.174397 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.361112 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.465992 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.504593 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.539902 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.563370 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.662513 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 
14:57:23.798377 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 16 14:57:23 crc kubenswrapper[4748]: I0216 14:57:23.835871 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.026308 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.027084 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.107855 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.144947 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.339339 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.457813 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.635289 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.674501 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.695873 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"config" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.855893 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.897746 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 16 14:57:24 crc kubenswrapper[4748]: I0216 14:57:24.937882 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.052507 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.073002 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.130486 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.203214 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.234023 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.242107 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.257709 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 16 14:57:25 crc 
kubenswrapper[4748]: I0216 14:57:25.281748 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.284625 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.443940 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.502338 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.506270 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.518695 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.562789 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.563634 4748 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.621701 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.633357 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.635618 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.635808 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.682617 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.732025 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.732369 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.765295 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.767992 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.780378 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 16 14:57:25 crc kubenswrapper[4748]: I0216 14:57:25.990871 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.016102 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.078133 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.163412 4748 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.177701 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.233110 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.330924 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.411501 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.420298 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.481882 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.490626 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.565778 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.664188 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.709997 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 
14:57:26.750705 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.774441 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.866268 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.868520 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.873129 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 14:57:26 crc kubenswrapper[4748]: I0216 14:57:26.884207 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.044291 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.112973 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.269530 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.272905 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.275287 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.313823 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.325029 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.370797 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.481048 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.521295 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.719122 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.814328 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.844307 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.906407 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.915912 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.933741 4748 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.994741 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.995535 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 16 14:57:27 crc kubenswrapper[4748]: I0216 14:57:27.996020 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.008317 4748 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.065544 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.134829 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.146462 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.230286 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.340875 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.343456 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"signing-cabundle" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.356598 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.389222 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.398191 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.503847 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.521891 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.549925 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.566795 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.638495 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.666221 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.737144 4748 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.739456 4748 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.740329 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=39.740310777 podStartE2EDuration="39.740310777s" podCreationTimestamp="2026-02-16 14:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:10.348594464 +0000 UTC m=+256.040263523" watchObservedRunningTime="2026-02-16 14:57:28.740310777 +0000 UTC m=+274.431979806" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.741570 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.741627 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.746381 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.764390 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=18.764367767 podStartE2EDuration="18.764367767s" podCreationTimestamp="2026-02-16 14:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:57:28.763531246 +0000 UTC m=+274.455200285" watchObservedRunningTime="2026-02-16 14:57:28.764367767 +0000 UTC m=+274.456036806" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.789407 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" 
Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.845938 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.923599 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 16 14:57:28 crc kubenswrapper[4748]: I0216 14:57:28.995733 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.012961 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.147054 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.316200 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.360018 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.360581 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.481087 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.526788 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.645674 4748 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.656342 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.660570 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.870685 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.914489 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.928489 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 16 14:57:29 crc kubenswrapper[4748]: I0216 14:57:29.980594 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.037336 4748 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.266147 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.278961 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.337425 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 
14:57:30.363325 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.415613 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.433767 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.464476 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.486289 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.522006 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.523985 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.542930 4748 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.675370 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.678386 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.833156 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 16 14:57:30 crc 
kubenswrapper[4748]: I0216 14:57:30.966664 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 16 14:57:30 crc kubenswrapper[4748]: I0216 14:57:30.982397 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.132077 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.151970 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.157522 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.227145 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.280918 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.299457 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.322676 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.332903 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.350881 4748 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.485682 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.490121 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.590102 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.627463 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.630171 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.708040 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.709863 4748 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.710277 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48" gracePeriod=5 Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.741437 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 16 14:57:31 crc 
kubenswrapper[4748]: I0216 14:57:31.762029 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.783667 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.795822 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.855210 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.889196 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.958386 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.974608 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 16 14:57:31 crc kubenswrapper[4748]: I0216 14:57:31.976545 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.169311 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.171306 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.207218 4748 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.305275 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.391554 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.422940 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.708779 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.710996 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.791565 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.887622 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.894377 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.908363 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.940733 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.956316 4748 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 16 14:57:32 crc kubenswrapper[4748]: I0216 14:57:32.976556 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.015215 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.058026 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.149270 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.253199 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.286219 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.318197 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.434874 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.462420 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.513032 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 16 14:57:33 crc 
kubenswrapper[4748]: I0216 14:57:33.514077 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.539791 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.609291 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.609334 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.709869 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.784021 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.794912 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.796806 4748 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.817889 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.858804 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.872701 4748 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.936202 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 16 14:57:33 crc kubenswrapper[4748]: I0216 14:57:33.944652 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.143244 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.215163 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.224450 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.227311 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.260776 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.303923 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.305912 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.317894 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.466822 4748 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.626076 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.841673 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.926228 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 16 14:57:34 crc kubenswrapper[4748]: I0216 14:57:34.963464 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.004899 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.039115 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.155511 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.166303 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.341482 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.457815 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.589045 4748 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.634971 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.706654 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.797342 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 16 14:57:35 crc kubenswrapper[4748]: I0216 14:57:35.980670 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.082942 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.131584 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.153792 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.282201 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.284152 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.318702 4748 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.395175 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.409967 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.430917 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.438667 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 16 14:57:36 crc kubenswrapper[4748]: I0216 14:57:36.891255 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.033933 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.304462 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.304553 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.390912 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.390989 4748 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48" exitCode=137 Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.391051 4748 scope.go:117] "RemoveContainer" containerID="1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.391096 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.406047 4748 scope.go:117] "RemoveContainer" containerID="1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48" Feb 16 14:57:37 crc kubenswrapper[4748]: E0216 14:57:37.406515 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48\": container with ID starting with 1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48 not found: ID does not exist" containerID="1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.406568 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48"} err="failed to get container status \"1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48\": rpc error: code = NotFound desc = could 
not find container \"1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48\": container with ID starting with 1b2b98a228b0c9587870b5df9678f62f13cfdc7c25013c2d6459113e3b840b48 not found: ID does not exist" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470215 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470273 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470333 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470367 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470409 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470453 4748 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470416 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470487 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.470588 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.472559 4748 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.472618 4748 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.472651 4748 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.480147 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.573921 4748 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.573972 4748 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.737697 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 16 14:57:37 crc kubenswrapper[4748]: I0216 14:57:37.801921 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 16 14:57:38 crc kubenswrapper[4748]: I0216 14:57:38.301746 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 16 14:57:39 crc kubenswrapper[4748]: I0216 14:57:39.031572 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 16 14:57:39 crc kubenswrapper[4748]: I0216 14:57:39.032570 4748 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 16 14:57:39 crc kubenswrapper[4748]: I0216 14:57:39.054893 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:57:39 crc kubenswrapper[4748]: I0216 14:57:39.054984 4748 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8de69855-a12f-44d5-8388-bca1474ec78e" Feb 16 14:57:39 crc 
kubenswrapper[4748]: I0216 14:57:39.070565 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 14:57:39 crc kubenswrapper[4748]: I0216 14:57:39.070622 4748 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="8de69855-a12f-44d5-8388-bca1474ec78e" Feb 16 14:57:54 crc kubenswrapper[4748]: I0216 14:57:54.752445 4748 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Feb 16 14:57:55 crc kubenswrapper[4748]: I0216 14:57:55.509609 4748 generic.go:334] "Generic (PLEG): container finished" podID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerID="29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b" exitCode=0 Feb 16 14:57:55 crc kubenswrapper[4748]: I0216 14:57:55.509684 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" event={"ID":"34a812e6-7d17-4c40-a9ab-376ef6ab5001","Type":"ContainerDied","Data":"29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b"} Feb 16 14:57:55 crc kubenswrapper[4748]: I0216 14:57:55.510628 4748 scope.go:117] "RemoveContainer" containerID="29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b" Feb 16 14:57:56 crc kubenswrapper[4748]: I0216 14:57:56.519934 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" event={"ID":"34a812e6-7d17-4c40-a9ab-376ef6ab5001","Type":"ContainerStarted","Data":"1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187"} Feb 16 14:57:56 crc kubenswrapper[4748]: I0216 14:57:56.520906 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:57:56 crc kubenswrapper[4748]: I0216 14:57:56.524228 4748 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:59:04 crc kubenswrapper[4748]: I0216 14:59:04.729988 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 14:59:04 crc kubenswrapper[4748]: I0216 14:59:04.730646 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.541160 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jgm9r"] Feb 16 14:59:05 crc kubenswrapper[4748]: E0216 14:59:05.541594 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.541620 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 16 14:59:05 crc kubenswrapper[4748]: E0216 14:59:05.541649 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" containerName="installer" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.541662 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" containerName="installer" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.541898 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.541933 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d1fe20-ca14-4999-8370-2e8fd245ed7f" containerName="installer" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.542841 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.559341 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jgm9r"] Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.715862 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-registry-certificates\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.715930 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.716224 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.716322 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tfw6\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-kube-api-access-5tfw6\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.716383 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-trusted-ca\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.716480 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.716539 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-registry-tls\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.716585 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-installation-pull-secrets\") pod 
\"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.750135 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.818595 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-registry-tls\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.818677 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.818805 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-registry-certificates\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.818843 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.818913 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.818972 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tfw6\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-kube-api-access-5tfw6\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.819016 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-trusted-ca\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.819498 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.820333 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-registry-certificates\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.821303 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-trusted-ca\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.829617 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-registry-tls\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.829744 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.849126 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc 
kubenswrapper[4748]: I0216 14:59:05.851888 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tfw6\" (UniqueName: \"kubernetes.io/projected/f7e40c93-cbca-48c9-b3e1-28509b5fa1b3-kube-api-access-5tfw6\") pod \"image-registry-66df7c8f76-jgm9r\" (UID: \"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:05 crc kubenswrapper[4748]: I0216 14:59:05.868163 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:06 crc kubenswrapper[4748]: I0216 14:59:06.389997 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jgm9r"] Feb 16 14:59:06 crc kubenswrapper[4748]: W0216 14:59:06.396193 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7e40c93_cbca_48c9_b3e1_28509b5fa1b3.slice/crio-e35ee5adc8722e96f033d202a908653625410c263513f389035b4fb08c074f49 WatchSource:0}: Error finding container e35ee5adc8722e96f033d202a908653625410c263513f389035b4fb08c074f49: Status 404 returned error can't find the container with id e35ee5adc8722e96f033d202a908653625410c263513f389035b4fb08c074f49 Feb 16 14:59:07 crc kubenswrapper[4748]: I0216 14:59:07.006618 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" event={"ID":"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3","Type":"ContainerStarted","Data":"06239330c66b997f59d9e360b137b854d3e4b1e4370ca1d943509fec905f4f95"} Feb 16 14:59:07 crc kubenswrapper[4748]: I0216 14:59:07.006687 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" event={"ID":"f7e40c93-cbca-48c9-b3e1-28509b5fa1b3","Type":"ContainerStarted","Data":"e35ee5adc8722e96f033d202a908653625410c263513f389035b4fb08c074f49"} Feb 16 
14:59:07 crc kubenswrapper[4748]: I0216 14:59:07.006768 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:07 crc kubenswrapper[4748]: I0216 14:59:07.029446 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" podStartSLOduration=2.029419533 podStartE2EDuration="2.029419533s" podCreationTimestamp="2026-02-16 14:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:59:07.028225104 +0000 UTC m=+372.719894153" watchObservedRunningTime="2026-02-16 14:59:07.029419533 +0000 UTC m=+372.721088582" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.563099 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9vdw"] Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.564193 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-s9vdw" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="registry-server" containerID="cri-o://75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673" gracePeriod=30 Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.569213 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hlms"] Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.569507 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2hlms" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="registry-server" containerID="cri-o://ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886" gracePeriod=30 Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.583064 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-qm9b7"] Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.583313 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" containerID="cri-o://1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187" gracePeriod=30 Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.593517 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2jwt"] Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.594110 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p2jwt" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerName="registry-server" containerID="cri-o://ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7" gracePeriod=30 Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.612808 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nb8t"] Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.613882 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.617626 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmd5x"] Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.618528 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nmd5x" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="registry-server" containerID="cri-o://415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19" gracePeriod=30 Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.645032 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nb8t"] Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.704534 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc2l8\" (UniqueName: \"kubernetes.io/projected/379d499a-4ed6-4e79-ae35-e934ca28ab85-kube-api-access-rc2l8\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.704608 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/379d499a-4ed6-4e79-ae35-e934ca28ab85-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.704645 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/379d499a-4ed6-4e79-ae35-e934ca28ab85-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.806548 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc2l8\" (UniqueName: \"kubernetes.io/projected/379d499a-4ed6-4e79-ae35-e934ca28ab85-kube-api-access-rc2l8\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.806622 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/379d499a-4ed6-4e79-ae35-e934ca28ab85-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.806654 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/379d499a-4ed6-4e79-ae35-e934ca28ab85-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.807766 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/379d499a-4ed6-4e79-ae35-e934ca28ab85-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc 
kubenswrapper[4748]: I0216 14:59:24.825881 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/379d499a-4ed6-4e79-ae35-e934ca28ab85-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.842692 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc2l8\" (UniqueName: \"kubernetes.io/projected/379d499a-4ed6-4e79-ae35-e934ca28ab85-kube-api-access-rc2l8\") pod \"marketplace-operator-79b997595-6nb8t\" (UID: \"379d499a-4ed6-4e79-ae35-e934ca28ab85\") " pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:24 crc kubenswrapper[4748]: I0216 14:59:24.939974 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.016988 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s9vdw" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.023487 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2jwt" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.028209 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.087497 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.102864 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nmd5x" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.110784 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-utilities\") pod \"712cdaea-3348-4e68-8761-842d520bedc6\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.111411 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-utilities" (OuterVolumeSpecName: "utilities") pod "712cdaea-3348-4e68-8761-842d520bedc6" (UID: "712cdaea-3348-4e68-8761-842d520bedc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.114230 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-catalog-content\") pod \"712cdaea-3348-4e68-8761-842d520bedc6\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.114277 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxnsk\" (UniqueName: \"kubernetes.io/projected/34a812e6-7d17-4c40-a9ab-376ef6ab5001-kube-api-access-qxnsk\") pod \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.114352 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-utilities\") pod \"a62cdd4f-90c2-45c7-893d-35356124bf3c\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.114425 4748 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-trusted-ca\") pod \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.114466 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7xt7\" (UniqueName: \"kubernetes.io/projected/a62cdd4f-90c2-45c7-893d-35356124bf3c-kube-api-access-n7xt7\") pod \"a62cdd4f-90c2-45c7-893d-35356124bf3c\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.114490 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-catalog-content\") pod \"a62cdd4f-90c2-45c7-893d-35356124bf3c\" (UID: \"a62cdd4f-90c2-45c7-893d-35356124bf3c\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.114997 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-operator-metrics\") pod \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\" (UID: \"34a812e6-7d17-4c40-a9ab-376ef6ab5001\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.115086 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc5js\" (UniqueName: \"kubernetes.io/projected/712cdaea-3348-4e68-8761-842d520bedc6-kube-api-access-rc5js\") pod \"712cdaea-3348-4e68-8761-842d520bedc6\" (UID: \"712cdaea-3348-4e68-8761-842d520bedc6\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.116523 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-trusted-ca" 
(OuterVolumeSpecName: "marketplace-trusted-ca") pod "34a812e6-7d17-4c40-a9ab-376ef6ab5001" (UID: "34a812e6-7d17-4c40-a9ab-376ef6ab5001"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.116889 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.116908 4748 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.121569 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-utilities" (OuterVolumeSpecName: "utilities") pod "a62cdd4f-90c2-45c7-893d-35356124bf3c" (UID: "a62cdd4f-90c2-45c7-893d-35356124bf3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.121908 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712cdaea-3348-4e68-8761-842d520bedc6-kube-api-access-rc5js" (OuterVolumeSpecName: "kube-api-access-rc5js") pod "712cdaea-3348-4e68-8761-842d520bedc6" (UID: "712cdaea-3348-4e68-8761-842d520bedc6"). InnerVolumeSpecName "kube-api-access-rc5js". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.122176 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "34a812e6-7d17-4c40-a9ab-376ef6ab5001" (UID: "34a812e6-7d17-4c40-a9ab-376ef6ab5001"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.122518 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a62cdd4f-90c2-45c7-893d-35356124bf3c-kube-api-access-n7xt7" (OuterVolumeSpecName: "kube-api-access-n7xt7") pod "a62cdd4f-90c2-45c7-893d-35356124bf3c" (UID: "a62cdd4f-90c2-45c7-893d-35356124bf3c"). InnerVolumeSpecName "kube-api-access-n7xt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.129823 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2jwt" event={"ID":"a62cdd4f-90c2-45c7-893d-35356124bf3c","Type":"ContainerDied","Data":"ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.129923 4748 scope.go:117] "RemoveContainer" containerID="ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.129846 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2jwt" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.129768 4748 generic.go:334] "Generic (PLEG): container finished" podID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerID="ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7" exitCode=0 Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.130737 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2jwt" event={"ID":"a62cdd4f-90c2-45c7-893d-35356124bf3c","Type":"ContainerDied","Data":"fe757ed1c421fef133e961f7dba6b9733d8132ba76bed2cfb03eddc59b5a88da"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.136343 4748 generic.go:334] "Generic (PLEG): container finished" podID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerID="1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187" exitCode=0 Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.136405 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" event={"ID":"34a812e6-7d17-4c40-a9ab-376ef6ab5001","Type":"ContainerDied","Data":"1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.136438 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" event={"ID":"34a812e6-7d17-4c40-a9ab-376ef6ab5001","Type":"ContainerDied","Data":"f6e133777689d2cbd0a01de8d1f51aae02d8ac82aaa3ae06d9ea36aa13e1fce4"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.136491 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-qm9b7" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.139490 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a812e6-7d17-4c40-a9ab-376ef6ab5001-kube-api-access-qxnsk" (OuterVolumeSpecName: "kube-api-access-qxnsk") pod "34a812e6-7d17-4c40-a9ab-376ef6ab5001" (UID: "34a812e6-7d17-4c40-a9ab-376ef6ab5001"). InnerVolumeSpecName "kube-api-access-qxnsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.142254 4748 generic.go:334] "Generic (PLEG): container finished" podID="712cdaea-3348-4e68-8761-842d520bedc6" containerID="75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673" exitCode=0 Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.142387 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9vdw" event={"ID":"712cdaea-3348-4e68-8761-842d520bedc6","Type":"ContainerDied","Data":"75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.142426 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s9vdw" event={"ID":"712cdaea-3348-4e68-8761-842d520bedc6","Type":"ContainerDied","Data":"52348d50cb3eb3c29f641d8999e75d21021525b7a47cd13cbce22d439ac34d0f"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.143454 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s9vdw" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.151124 4748 generic.go:334] "Generic (PLEG): container finished" podID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerID="ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886" exitCode=0 Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.151192 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hlms" event={"ID":"3807d8da-105e-4dbb-8446-81b6d4a2ae05","Type":"ContainerDied","Data":"ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.151220 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2hlms" event={"ID":"3807d8da-105e-4dbb-8446-81b6d4a2ae05","Type":"ContainerDied","Data":"c7ec1f6149fbd95886f298acec833da143ab202ffb558bd634030fb120e921a9"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.151288 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2hlms" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.154902 4748 scope.go:117] "RemoveContainer" containerID="68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.166028 4748 generic.go:334] "Generic (PLEG): container finished" podID="883993d1-6837-4aef-952c-720e2901efb5" containerID="415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19" exitCode=0 Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.166074 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmd5x" event={"ID":"883993d1-6837-4aef-952c-720e2901efb5","Type":"ContainerDied","Data":"415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.166102 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nmd5x" event={"ID":"883993d1-6837-4aef-952c-720e2901efb5","Type":"ContainerDied","Data":"934a6933cf9ec534faf6de321bfcceca4676402d9bd245616cead4c8e65efce6"} Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.166137 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nmd5x" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.183912 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "712cdaea-3348-4e68-8761-842d520bedc6" (UID: "712cdaea-3348-4e68-8761-842d520bedc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.184099 4748 scope.go:117] "RemoveContainer" containerID="741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.185036 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a62cdd4f-90c2-45c7-893d-35356124bf3c" (UID: "a62cdd4f-90c2-45c7-893d-35356124bf3c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.211923 4748 scope.go:117] "RemoveContainer" containerID="ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.213454 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7\": container with ID starting with ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7 not found: ID does not exist" containerID="ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.213509 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7"} err="failed to get container status \"ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7\": rpc error: code = NotFound desc = could not find container \"ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7\": container with ID starting with ccd33b66332103437e3a75703387ec552ddff90eb271d607917b890d88d3b0b7 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.213535 4748 scope.go:117] 
"RemoveContainer" containerID="68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.213827 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24\": container with ID starting with 68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24 not found: ID does not exist" containerID="68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.213843 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24"} err="failed to get container status \"68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24\": rpc error: code = NotFound desc = could not find container \"68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24\": container with ID starting with 68f8212000c9dc78501f1a7fa961c8cf7cf2e57ae0e0278f51f03e52c2b24a24 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.213855 4748 scope.go:117] "RemoveContainer" containerID="741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.214414 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f\": container with ID starting with 741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f not found: ID does not exist" containerID="741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.214429 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f"} err="failed to get container status \"741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f\": rpc error: code = NotFound desc = could not find container \"741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f\": container with ID starting with 741f11faffedc3d341d38f205067638dc0bc7299ee855465b62519a86a50049f not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.214439 4748 scope.go:117] "RemoveContainer" containerID="1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.222967 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-catalog-content\") pod \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223004 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-utilities\") pod \"883993d1-6837-4aef-952c-720e2901efb5\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223104 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrg62\" (UniqueName: \"kubernetes.io/projected/883993d1-6837-4aef-952c-720e2901efb5-kube-api-access-nrg62\") pod \"883993d1-6837-4aef-952c-720e2901efb5\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223137 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4sq2\" (UniqueName: \"kubernetes.io/projected/3807d8da-105e-4dbb-8446-81b6d4a2ae05-kube-api-access-z4sq2\") pod 
\"3807d8da-105e-4dbb-8446-81b6d4a2ae05\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223158 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-catalog-content\") pod \"883993d1-6837-4aef-952c-720e2901efb5\" (UID: \"883993d1-6837-4aef-952c-720e2901efb5\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223229 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-utilities\") pod \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\" (UID: \"3807d8da-105e-4dbb-8446-81b6d4a2ae05\") " Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223442 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7xt7\" (UniqueName: \"kubernetes.io/projected/a62cdd4f-90c2-45c7-893d-35356124bf3c-kube-api-access-n7xt7\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223454 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223463 4748 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/34a812e6-7d17-4c40-a9ab-376ef6ab5001-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223472 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc5js\" (UniqueName: \"kubernetes.io/projected/712cdaea-3348-4e68-8761-842d520bedc6-kube-api-access-rc5js\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223481 4748 
reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/712cdaea-3348-4e68-8761-842d520bedc6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223489 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxnsk\" (UniqueName: \"kubernetes.io/projected/34a812e6-7d17-4c40-a9ab-376ef6ab5001-kube-api-access-qxnsk\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.223498 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a62cdd4f-90c2-45c7-893d-35356124bf3c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.224232 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-utilities" (OuterVolumeSpecName: "utilities") pod "3807d8da-105e-4dbb-8446-81b6d4a2ae05" (UID: "3807d8da-105e-4dbb-8446-81b6d4a2ae05"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.224839 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-utilities" (OuterVolumeSpecName: "utilities") pod "883993d1-6837-4aef-952c-720e2901efb5" (UID: "883993d1-6837-4aef-952c-720e2901efb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.227347 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3807d8da-105e-4dbb-8446-81b6d4a2ae05-kube-api-access-z4sq2" (OuterVolumeSpecName: "kube-api-access-z4sq2") pod "3807d8da-105e-4dbb-8446-81b6d4a2ae05" (UID: "3807d8da-105e-4dbb-8446-81b6d4a2ae05"). 
InnerVolumeSpecName "kube-api-access-z4sq2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.227988 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/883993d1-6837-4aef-952c-720e2901efb5-kube-api-access-nrg62" (OuterVolumeSpecName: "kube-api-access-nrg62") pod "883993d1-6837-4aef-952c-720e2901efb5" (UID: "883993d1-6837-4aef-952c-720e2901efb5"). InnerVolumeSpecName "kube-api-access-nrg62". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.238186 4748 scope.go:117] "RemoveContainer" containerID="29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.257110 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6nb8t"] Feb 16 14:59:25 crc kubenswrapper[4748]: W0216 14:59:25.257490 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod379d499a_4ed6_4e79_ae35_e934ca28ab85.slice/crio-b6d8fbc80f9f7bd3fabeed899869fa9e91e654364a90ad55d0daf84893b4dbe7 WatchSource:0}: Error finding container b6d8fbc80f9f7bd3fabeed899869fa9e91e654364a90ad55d0daf84893b4dbe7: Status 404 returned error can't find the container with id b6d8fbc80f9f7bd3fabeed899869fa9e91e654364a90ad55d0daf84893b4dbe7 Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.274223 4748 scope.go:117] "RemoveContainer" containerID="1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.275647 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187\": container with ID starting with 
1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187 not found: ID does not exist" containerID="1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.275692 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187"} err="failed to get container status \"1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187\": rpc error: code = NotFound desc = could not find container \"1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187\": container with ID starting with 1f6864cab1fd17f8cf85d4b40547aac8ec0a22a8dd01f779237e28907aa96187 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.275732 4748 scope.go:117] "RemoveContainer" containerID="29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.276214 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b\": container with ID starting with 29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b not found: ID does not exist" containerID="29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.276238 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b"} err="failed to get container status \"29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b\": rpc error: code = NotFound desc = could not find container \"29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b\": container with ID starting with 29c427671f9f4318c9201e8891e88a883dec2d26242cde46d0b9fe50d364714b not found: ID does not 
exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.276253 4748 scope.go:117] "RemoveContainer" containerID="75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.293934 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3807d8da-105e-4dbb-8446-81b6d4a2ae05" (UID: "3807d8da-105e-4dbb-8446-81b6d4a2ae05"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.303044 4748 scope.go:117] "RemoveContainer" containerID="a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.324269 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4sq2\" (UniqueName: \"kubernetes.io/projected/3807d8da-105e-4dbb-8446-81b6d4a2ae05-kube-api-access-z4sq2\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.324302 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.324315 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3807d8da-105e-4dbb-8446-81b6d4a2ae05-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.324326 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.324337 4748 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nrg62\" (UniqueName: \"kubernetes.io/projected/883993d1-6837-4aef-952c-720e2901efb5-kube-api-access-nrg62\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.325992 4748 scope.go:117] "RemoveContainer" containerID="be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.350140 4748 scope.go:117] "RemoveContainer" containerID="75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.350771 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673\": container with ID starting with 75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673 not found: ID does not exist" containerID="75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.350802 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673"} err="failed to get container status \"75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673\": rpc error: code = NotFound desc = could not find container \"75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673\": container with ID starting with 75f6618b0645e6765845bad863e636e302c7106d1c479e094245d3b30ee9c673 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.350835 4748 scope.go:117] "RemoveContainer" containerID="a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.351220 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9\": container with ID starting with a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9 not found: ID does not exist" containerID="a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.351244 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9"} err="failed to get container status \"a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9\": rpc error: code = NotFound desc = could not find container \"a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9\": container with ID starting with a9f089ac5d63ed229345f47ba22fc3dddf473f4cb6735b555e81023e80ba64e9 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.351257 4748 scope.go:117] "RemoveContainer" containerID="be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.352873 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837\": container with ID starting with be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837 not found: ID does not exist" containerID="be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.352931 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837"} err="failed to get container status \"be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837\": rpc error: code = NotFound desc = could not find container \"be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837\": container with ID 
starting with be4eb096d36751cabca9179463245a34a7cd9c6a14a56ccf2ef7ea6b65760837 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.352964 4748 scope.go:117] "RemoveContainer" containerID="ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.366250 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "883993d1-6837-4aef-952c-720e2901efb5" (UID: "883993d1-6837-4aef-952c-720e2901efb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.368036 4748 scope.go:117] "RemoveContainer" containerID="fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.380309 4748 scope.go:117] "RemoveContainer" containerID="5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.394727 4748 scope.go:117] "RemoveContainer" containerID="ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.395526 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886\": container with ID starting with ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886 not found: ID does not exist" containerID="ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.395572 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886"} err="failed to get container 
status \"ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886\": rpc error: code = NotFound desc = could not find container \"ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886\": container with ID starting with ae5743310e7cb8f0ccd05e12a627d5b1b63b8e228dc0540571d890a75f126886 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.395600 4748 scope.go:117] "RemoveContainer" containerID="fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.397215 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09\": container with ID starting with fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09 not found: ID does not exist" containerID="fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.397275 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09"} err="failed to get container status \"fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09\": rpc error: code = NotFound desc = could not find container \"fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09\": container with ID starting with fa9c13f974775eea18f104ea9dff35d5c5f9c0a0989c6ab724ce0bf0ba199f09 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.397312 4748 scope.go:117] "RemoveContainer" containerID="5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.398727 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d\": container with ID starting with 5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d not found: ID does not exist" containerID="5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.398767 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d"} err="failed to get container status \"5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d\": rpc error: code = NotFound desc = could not find container \"5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d\": container with ID starting with 5cd03247dcad1f9c4781bc2a63c7ec4c4e839f2cd34ee2716aab9d0b7b8fd00d not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.398788 4748 scope.go:117] "RemoveContainer" containerID="415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.412676 4748 scope.go:117] "RemoveContainer" containerID="d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.425811 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/883993d1-6837-4aef-952c-720e2901efb5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.429911 4748 scope.go:117] "RemoveContainer" containerID="2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.468026 4748 scope.go:117] "RemoveContainer" containerID="415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.477063 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19\": container with ID starting with 415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19 not found: ID does not exist" containerID="415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.477117 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19"} err="failed to get container status \"415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19\": rpc error: code = NotFound desc = could not find container \"415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19\": container with ID starting with 415d331d77ea40682c133096f6eb05504639d985d1a13dba64928f4b26335c19 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.477144 4748 scope.go:117] "RemoveContainer" containerID="d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.479698 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9\": container with ID starting with d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9 not found: ID does not exist" containerID="d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.479747 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9"} err="failed to get container status \"d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9\": rpc error: code = NotFound desc = could not find container 
\"d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9\": container with ID starting with d3259383d61fcd91559659e7436038ad49719141f1e6cfb2a5e48318c40cd4b9 not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.479771 4748 scope.go:117] "RemoveContainer" containerID="2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f" Feb 16 14:59:25 crc kubenswrapper[4748]: E0216 14:59:25.480762 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f\": container with ID starting with 2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f not found: ID does not exist" containerID="2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.480816 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f"} err="failed to get container status \"2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f\": rpc error: code = NotFound desc = could not find container \"2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f\": container with ID starting with 2b481bb20dce5c88bdc29b354a2468b1c1a9d0c7c04d0f29c2b574e9810f393f not found: ID does not exist" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.484629 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2jwt"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.499203 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2jwt"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.505208 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s9vdw"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 
14:59:25.514741 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s9vdw"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.518366 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qm9b7"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.522224 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-qm9b7"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.527587 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2hlms"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.530658 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2hlms"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.538256 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nmd5x"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.542274 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nmd5x"] Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.874237 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jgm9r" Feb 16 14:59:25 crc kubenswrapper[4748]: I0216 14:59:25.938237 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-26scw"] Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.178410 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" event={"ID":"379d499a-4ed6-4e79-ae35-e934ca28ab85","Type":"ContainerStarted","Data":"fad380fc4a34940c956db26d035fa443cb7503776e86439a9bd215d14acb486c"} Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.178458 4748 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" event={"ID":"379d499a-4ed6-4e79-ae35-e934ca28ab85","Type":"ContainerStarted","Data":"b6d8fbc80f9f7bd3fabeed899869fa9e91e654364a90ad55d0daf84893b4dbe7"} Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.178692 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.183130 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.202036 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6nb8t" podStartSLOduration=2.202013841 podStartE2EDuration="2.202013841s" podCreationTimestamp="2026-02-16 14:59:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 14:59:26.196242969 +0000 UTC m=+391.887912018" watchObservedRunningTime="2026-02-16 14:59:26.202013841 +0000 UTC m=+391.893682880" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.786034 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-67snq"] Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.786981 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787012 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787023 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="extract-content" Feb 
16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787033 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="extract-content" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787048 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787058 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787066 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787073 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787084 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787091 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787103 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787110 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787121 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="registry-server" 
Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787128 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787137 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="extract-content" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787146 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="extract-content" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787155 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787162 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787171 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787179 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787191 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787199 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="extract-utilities" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787208 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerName="extract-content" 
Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787215 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerName="extract-content" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787235 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="extract-content" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787243 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="extract-content" Feb 16 14:59:26 crc kubenswrapper[4748]: E0216 14:59:26.787256 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787264 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787389 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787403 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787412 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="883993d1-6837-4aef-952c-720e2901efb5" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787420 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="712cdaea-3348-4e68-8761-842d520bedc6" containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787434 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" 
containerName="registry-server" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.787627 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" containerName="marketplace-operator" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.788293 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.790278 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.798647 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-67snq"] Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.844757 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99a6afe6-d571-490a-b304-1e8727a3b41c-catalog-content\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.844863 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99a6afe6-d571-490a-b304-1e8727a3b41c-utilities\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.844883 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kll8m\" (UniqueName: \"kubernetes.io/projected/99a6afe6-d571-490a-b304-1e8727a3b41c-kube-api-access-kll8m\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " 
pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.946542 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99a6afe6-d571-490a-b304-1e8727a3b41c-utilities\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.946605 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kll8m\" (UniqueName: \"kubernetes.io/projected/99a6afe6-d571-490a-b304-1e8727a3b41c-kube-api-access-kll8m\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.946706 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99a6afe6-d571-490a-b304-1e8727a3b41c-catalog-content\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.947306 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99a6afe6-d571-490a-b304-1e8727a3b41c-catalog-content\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.947308 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99a6afe6-d571-490a-b304-1e8727a3b41c-utilities\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" 
Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.967232 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kll8m\" (UniqueName: \"kubernetes.io/projected/99a6afe6-d571-490a-b304-1e8727a3b41c-kube-api-access-kll8m\") pod \"redhat-marketplace-67snq\" (UID: \"99a6afe6-d571-490a-b304-1e8727a3b41c\") " pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.986758 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s5dp2"] Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.987812 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:26 crc kubenswrapper[4748]: I0216 14:59:26.991405 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.006247 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a812e6-7d17-4c40-a9ab-376ef6ab5001" path="/var/lib/kubelet/pods/34a812e6-7d17-4c40-a9ab-376ef6ab5001/volumes" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.013692 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3807d8da-105e-4dbb-8446-81b6d4a2ae05" path="/var/lib/kubelet/pods/3807d8da-105e-4dbb-8446-81b6d4a2ae05/volumes" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.015886 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712cdaea-3348-4e68-8761-842d520bedc6" path="/var/lib/kubelet/pods/712cdaea-3348-4e68-8761-842d520bedc6/volumes" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.016941 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="883993d1-6837-4aef-952c-720e2901efb5" path="/var/lib/kubelet/pods/883993d1-6837-4aef-952c-720e2901efb5/volumes" Feb 16 14:59:27 crc kubenswrapper[4748]: 
I0216 14:59:27.018524 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a62cdd4f-90c2-45c7-893d-35356124bf3c" path="/var/lib/kubelet/pods/a62cdd4f-90c2-45c7-893d-35356124bf3c/volumes" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.019330 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5dp2"] Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.048387 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb12c0d-7fe4-443b-84ef-a50362156745-utilities\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.048429 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvrf4\" (UniqueName: \"kubernetes.io/projected/dfb12c0d-7fe4-443b-84ef-a50362156745-kube-api-access-lvrf4\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.048465 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb12c0d-7fe4-443b-84ef-a50362156745-catalog-content\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.116573 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-67snq" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.149755 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb12c0d-7fe4-443b-84ef-a50362156745-utilities\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.149815 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvrf4\" (UniqueName: \"kubernetes.io/projected/dfb12c0d-7fe4-443b-84ef-a50362156745-kube-api-access-lvrf4\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.149859 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb12c0d-7fe4-443b-84ef-a50362156745-catalog-content\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.150911 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfb12c0d-7fe4-443b-84ef-a50362156745-utilities\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2" Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.151150 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfb12c0d-7fe4-443b-84ef-a50362156745-catalog-content\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " 
pod="openshift-marketplace/certified-operators-s5dp2"
Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.170261 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvrf4\" (UniqueName: \"kubernetes.io/projected/dfb12c0d-7fe4-443b-84ef-a50362156745-kube-api-access-lvrf4\") pod \"certified-operators-s5dp2\" (UID: \"dfb12c0d-7fe4-443b-84ef-a50362156745\") " pod="openshift-marketplace/certified-operators-s5dp2"
Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.308126 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s5dp2"
Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.334190 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-67snq"]
Feb 16 14:59:27 crc kubenswrapper[4748]: W0216 14:59:27.345500 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99a6afe6_d571_490a_b304_1e8727a3b41c.slice/crio-ba9e6805db8d1fe356fe548713ae2d1d4ac12a3ab79266ceb0105f46f654b905 WatchSource:0}: Error finding container ba9e6805db8d1fe356fe548713ae2d1d4ac12a3ab79266ceb0105f46f654b905: Status 404 returned error can't find the container with id ba9e6805db8d1fe356fe548713ae2d1d4ac12a3ab79266ceb0105f46f654b905
Feb 16 14:59:27 crc kubenswrapper[4748]: I0216 14:59:27.517815 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s5dp2"]
Feb 16 14:59:27 crc kubenswrapper[4748]: W0216 14:59:27.595012 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfb12c0d_7fe4_443b_84ef_a50362156745.slice/crio-a3c839696bc54026655ef71e2db1838ed2cb7a0f4fa93355583e27241a65e475 WatchSource:0}: Error finding container a3c839696bc54026655ef71e2db1838ed2cb7a0f4fa93355583e27241a65e475: Status 404 returned error can't find the container with id a3c839696bc54026655ef71e2db1838ed2cb7a0f4fa93355583e27241a65e475
Feb 16 14:59:28 crc kubenswrapper[4748]: I0216 14:59:28.192930 4748 generic.go:334] "Generic (PLEG): container finished" podID="dfb12c0d-7fe4-443b-84ef-a50362156745" containerID="0a5c9541407eda46c963b3fa2eb0c8c93a0df17bebb9c55da9607a8dcd7997ae" exitCode=0
Feb 16 14:59:28 crc kubenswrapper[4748]: I0216 14:59:28.193037 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dp2" event={"ID":"dfb12c0d-7fe4-443b-84ef-a50362156745","Type":"ContainerDied","Data":"0a5c9541407eda46c963b3fa2eb0c8c93a0df17bebb9c55da9607a8dcd7997ae"}
Feb 16 14:59:28 crc kubenswrapper[4748]: I0216 14:59:28.194943 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dp2" event={"ID":"dfb12c0d-7fe4-443b-84ef-a50362156745","Type":"ContainerStarted","Data":"a3c839696bc54026655ef71e2db1838ed2cb7a0f4fa93355583e27241a65e475"}
Feb 16 14:59:28 crc kubenswrapper[4748]: I0216 14:59:28.200120 4748 generic.go:334] "Generic (PLEG): container finished" podID="99a6afe6-d571-490a-b304-1e8727a3b41c" containerID="51cdd22931c71fdd6c2b720186981913bcf098fddf38ec010251f467cfb203ec" exitCode=0
Feb 16 14:59:28 crc kubenswrapper[4748]: I0216 14:59:28.200330 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67snq" event={"ID":"99a6afe6-d571-490a-b304-1e8727a3b41c","Type":"ContainerDied","Data":"51cdd22931c71fdd6c2b720186981913bcf098fddf38ec010251f467cfb203ec"}
Feb 16 14:59:28 crc kubenswrapper[4748]: I0216 14:59:28.200382 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67snq" event={"ID":"99a6afe6-d571-490a-b304-1e8727a3b41c","Type":"ContainerStarted","Data":"ba9e6805db8d1fe356fe548713ae2d1d4ac12a3ab79266ceb0105f46f654b905"}
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.183193 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wcmxf"]
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.187910 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.196289 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.202921 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wcmxf"]
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.211313 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dp2" event={"ID":"dfb12c0d-7fe4-443b-84ef-a50362156745","Type":"ContainerStarted","Data":"2d4e7fac117ad6683ff6f6f6e9d17188af83b33d183f4399f0baa684b6198a52"}
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.213469 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67snq" event={"ID":"99a6afe6-d571-490a-b304-1e8727a3b41c","Type":"ContainerStarted","Data":"3bbf6f4eee8247d526f92eb111a2e5140779b0174e199c7f5b3fc246bba2688e"}
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.302903 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjz2n\" (UniqueName: \"kubernetes.io/projected/635b0481-4777-4121-aac7-e967c93db3fe-kube-api-access-jjz2n\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.303010 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b0481-4777-4121-aac7-e967c93db3fe-utilities\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.303068 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635b0481-4777-4121-aac7-e967c93db3fe-catalog-content\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.382072 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wk4zf"]
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.399282 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.403638 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.404449 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjz2n\" (UniqueName: \"kubernetes.io/projected/635b0481-4777-4121-aac7-e967c93db3fe-kube-api-access-jjz2n\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.404518 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b0481-4777-4121-aac7-e967c93db3fe-utilities\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.404573 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635b0481-4777-4121-aac7-e967c93db3fe-catalog-content\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.404776 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wk4zf"]
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.405348 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/635b0481-4777-4121-aac7-e967c93db3fe-catalog-content\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.405353 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/635b0481-4777-4121-aac7-e967c93db3fe-utilities\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.433169 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjz2n\" (UniqueName: \"kubernetes.io/projected/635b0481-4777-4121-aac7-e967c93db3fe-kube-api-access-jjz2n\") pod \"redhat-operators-wcmxf\" (UID: \"635b0481-4777-4121-aac7-e967c93db3fe\") " pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.506366 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0ce93d-b322-42ac-b2c1-798c2155c41d-utilities\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.506484 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrbn\" (UniqueName: \"kubernetes.io/projected/be0ce93d-b322-42ac-b2c1-798c2155c41d-kube-api-access-jwrbn\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.506559 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0ce93d-b322-42ac-b2c1-798c2155c41d-catalog-content\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.528171 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.607480 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0ce93d-b322-42ac-b2c1-798c2155c41d-catalog-content\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.608080 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0ce93d-b322-42ac-b2c1-798c2155c41d-utilities\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.608323 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrbn\" (UniqueName: \"kubernetes.io/projected/be0ce93d-b322-42ac-b2c1-798c2155c41d-kube-api-access-jwrbn\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.608624 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be0ce93d-b322-42ac-b2c1-798c2155c41d-catalog-content\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.609141 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be0ce93d-b322-42ac-b2c1-798c2155c41d-utilities\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.632308 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwrbn\" (UniqueName: \"kubernetes.io/projected/be0ce93d-b322-42ac-b2c1-798c2155c41d-kube-api-access-jwrbn\") pod \"community-operators-wk4zf\" (UID: \"be0ce93d-b322-42ac-b2c1-798c2155c41d\") " pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.742993 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wcmxf"]
Feb 16 14:59:29 crc kubenswrapper[4748]: W0216 14:59:29.751452 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod635b0481_4777_4121_aac7_e967c93db3fe.slice/crio-f1831e91cd2d13b9220db59a52faa33feed147e185f6d50aa9993ee602a07f77 WatchSource:0}: Error finding container f1831e91cd2d13b9220db59a52faa33feed147e185f6d50aa9993ee602a07f77: Status 404 returned error can't find the container with id f1831e91cd2d13b9220db59a52faa33feed147e185f6d50aa9993ee602a07f77
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.780000 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:29 crc kubenswrapper[4748]: I0216 14:59:29.979318 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wk4zf"]
Feb 16 14:59:29 crc kubenswrapper[4748]: W0216 14:59:29.989520 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe0ce93d_b322_42ac_b2c1_798c2155c41d.slice/crio-0bae47506b86c127976f44f1bbfbd49a17057413fed4d88d8c38eec31bc12b19 WatchSource:0}: Error finding container 0bae47506b86c127976f44f1bbfbd49a17057413fed4d88d8c38eec31bc12b19: Status 404 returned error can't find the container with id 0bae47506b86c127976f44f1bbfbd49a17057413fed4d88d8c38eec31bc12b19
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.219672 4748 generic.go:334] "Generic (PLEG): container finished" podID="dfb12c0d-7fe4-443b-84ef-a50362156745" containerID="2d4e7fac117ad6683ff6f6f6e9d17188af83b33d183f4399f0baa684b6198a52" exitCode=0
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.219756 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dp2" event={"ID":"dfb12c0d-7fe4-443b-84ef-a50362156745","Type":"ContainerDied","Data":"2d4e7fac117ad6683ff6f6f6e9d17188af83b33d183f4399f0baa684b6198a52"}
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.229070 4748 generic.go:334] "Generic (PLEG): container finished" podID="99a6afe6-d571-490a-b304-1e8727a3b41c" containerID="3bbf6f4eee8247d526f92eb111a2e5140779b0174e199c7f5b3fc246bba2688e" exitCode=0
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.229165 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67snq" event={"ID":"99a6afe6-d571-490a-b304-1e8727a3b41c","Type":"ContainerDied","Data":"3bbf6f4eee8247d526f92eb111a2e5140779b0174e199c7f5b3fc246bba2688e"}
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.234105 4748 generic.go:334] "Generic (PLEG): container finished" podID="be0ce93d-b322-42ac-b2c1-798c2155c41d" containerID="5322a1b557421ec52efa4c02e00df69ccb2ad17632cc4854929b8d14cc03a5a5" exitCode=0
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.234222 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wk4zf" event={"ID":"be0ce93d-b322-42ac-b2c1-798c2155c41d","Type":"ContainerDied","Data":"5322a1b557421ec52efa4c02e00df69ccb2ad17632cc4854929b8d14cc03a5a5"}
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.234258 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wk4zf" event={"ID":"be0ce93d-b322-42ac-b2c1-798c2155c41d","Type":"ContainerStarted","Data":"0bae47506b86c127976f44f1bbfbd49a17057413fed4d88d8c38eec31bc12b19"}
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.243488 4748 generic.go:334] "Generic (PLEG): container finished" podID="635b0481-4777-4121-aac7-e967c93db3fe" containerID="f325ef7c4cdfb4fe345a09c644690736d2a4f3a502d359430e8269eed77d4bda" exitCode=0
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.243527 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcmxf" event={"ID":"635b0481-4777-4121-aac7-e967c93db3fe","Type":"ContainerDied","Data":"f325ef7c4cdfb4fe345a09c644690736d2a4f3a502d359430e8269eed77d4bda"}
Feb 16 14:59:30 crc kubenswrapper[4748]: I0216 14:59:30.243554 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcmxf" event={"ID":"635b0481-4777-4121-aac7-e967c93db3fe","Type":"ContainerStarted","Data":"f1831e91cd2d13b9220db59a52faa33feed147e185f6d50aa9993ee602a07f77"}
Feb 16 14:59:31 crc kubenswrapper[4748]: I0216 14:59:31.254707 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wk4zf" event={"ID":"be0ce93d-b322-42ac-b2c1-798c2155c41d","Type":"ContainerStarted","Data":"3ad90cbf993e1802f2c3aabbaa4eb8fcbe137d0b60e5b0cbb3b7692d8c560f61"}
Feb 16 14:59:31 crc kubenswrapper[4748]: I0216 14:59:31.257663 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcmxf" event={"ID":"635b0481-4777-4121-aac7-e967c93db3fe","Type":"ContainerStarted","Data":"2976b03e244e227093c4c1339a1cbf9da988afddefbd125392bd9bad917ce19d"}
Feb 16 14:59:31 crc kubenswrapper[4748]: I0216 14:59:31.261661 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s5dp2" event={"ID":"dfb12c0d-7fe4-443b-84ef-a50362156745","Type":"ContainerStarted","Data":"bbb7e1bb92b059b3a6d70650f81b69c93fca74f90ee8344e8c19654324196048"}
Feb 16 14:59:31 crc kubenswrapper[4748]: I0216 14:59:31.265502 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-67snq" event={"ID":"99a6afe6-d571-490a-b304-1e8727a3b41c","Type":"ContainerStarted","Data":"73b6fa8fe4fb1c7181659c2c77d60be691ef6dd43d50f5ba443ac6a2899ee43b"}
Feb 16 14:59:31 crc kubenswrapper[4748]: I0216 14:59:31.304734 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-s5dp2" podStartSLOduration=2.6423190720000003 podStartE2EDuration="5.304696727s" podCreationTimestamp="2026-02-16 14:59:26 +0000 UTC" firstStartedPulling="2026-02-16 14:59:28.195499805 +0000 UTC m=+393.887168884" lastFinishedPulling="2026-02-16 14:59:30.8578775 +0000 UTC m=+396.549546539" observedRunningTime="2026-02-16 14:59:31.301675943 +0000 UTC m=+396.993344982" watchObservedRunningTime="2026-02-16 14:59:31.304696727 +0000 UTC m=+396.996365766"
Feb 16 14:59:31 crc kubenswrapper[4748]: I0216 14:59:31.328067 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-67snq" podStartSLOduration=2.925701074 podStartE2EDuration="5.328045731s" podCreationTimestamp="2026-02-16 14:59:26 +0000 UTC" firstStartedPulling="2026-02-16 14:59:28.202552828 +0000 UTC m=+393.894221867" lastFinishedPulling="2026-02-16 14:59:30.604897475 +0000 UTC m=+396.296566524" observedRunningTime="2026-02-16 14:59:31.320608138 +0000 UTC m=+397.012277187" watchObservedRunningTime="2026-02-16 14:59:31.328045731 +0000 UTC m=+397.019714780"
Feb 16 14:59:32 crc kubenswrapper[4748]: I0216 14:59:32.273798 4748 generic.go:334] "Generic (PLEG): container finished" podID="be0ce93d-b322-42ac-b2c1-798c2155c41d" containerID="3ad90cbf993e1802f2c3aabbaa4eb8fcbe137d0b60e5b0cbb3b7692d8c560f61" exitCode=0
Feb 16 14:59:32 crc kubenswrapper[4748]: I0216 14:59:32.273913 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wk4zf" event={"ID":"be0ce93d-b322-42ac-b2c1-798c2155c41d","Type":"ContainerDied","Data":"3ad90cbf993e1802f2c3aabbaa4eb8fcbe137d0b60e5b0cbb3b7692d8c560f61"}
Feb 16 14:59:32 crc kubenswrapper[4748]: I0216 14:59:32.276207 4748 generic.go:334] "Generic (PLEG): container finished" podID="635b0481-4777-4121-aac7-e967c93db3fe" containerID="2976b03e244e227093c4c1339a1cbf9da988afddefbd125392bd9bad917ce19d" exitCode=0
Feb 16 14:59:32 crc kubenswrapper[4748]: I0216 14:59:32.276624 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcmxf" event={"ID":"635b0481-4777-4121-aac7-e967c93db3fe","Type":"ContainerDied","Data":"2976b03e244e227093c4c1339a1cbf9da988afddefbd125392bd9bad917ce19d"}
Feb 16 14:59:33 crc kubenswrapper[4748]: I0216 14:59:33.284018 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wk4zf" event={"ID":"be0ce93d-b322-42ac-b2c1-798c2155c41d","Type":"ContainerStarted","Data":"6098608174f5f39d3001136d558fb85c74b8025a3b5eff6391ddff9f3a2c4af1"}
Feb 16 14:59:33 crc kubenswrapper[4748]: I0216 14:59:33.286411 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcmxf" event={"ID":"635b0481-4777-4121-aac7-e967c93db3fe","Type":"ContainerStarted","Data":"c7103b0fba8d717703821c3fa60e9732c68866ed7f1744a896d4a8fac5a235be"}
Feb 16 14:59:33 crc kubenswrapper[4748]: I0216 14:59:33.303168 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wk4zf" podStartSLOduration=1.78498467 podStartE2EDuration="4.303150892s" podCreationTimestamp="2026-02-16 14:59:29 +0000 UTC" firstStartedPulling="2026-02-16 14:59:30.252260612 +0000 UTC m=+395.943929651" lastFinishedPulling="2026-02-16 14:59:32.770426834 +0000 UTC m=+398.462095873" observedRunningTime="2026-02-16 14:59:33.30103268 +0000 UTC m=+398.992701719" watchObservedRunningTime="2026-02-16 14:59:33.303150892 +0000 UTC m=+398.994819931"
Feb 16 14:59:33 crc kubenswrapper[4748]: I0216 14:59:33.317349 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wcmxf" podStartSLOduration=1.893061394 podStartE2EDuration="4.31733589s" podCreationTimestamp="2026-02-16 14:59:29 +0000 UTC" firstStartedPulling="2026-02-16 14:59:30.252236341 +0000 UTC m=+395.943905370" lastFinishedPulling="2026-02-16 14:59:32.676510827 +0000 UTC m=+398.368179866" observedRunningTime="2026-02-16 14:59:33.317204157 +0000 UTC m=+399.008873186" watchObservedRunningTime="2026-02-16 14:59:33.31733589 +0000 UTC m=+399.009004929"
Feb 16 14:59:34 crc kubenswrapper[4748]: I0216 14:59:34.730146 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 14:59:34 crc kubenswrapper[4748]: I0216 14:59:34.731165 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 14:59:37 crc kubenswrapper[4748]: I0216 14:59:37.117766 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-67snq"
Feb 16 14:59:37 crc kubenswrapper[4748]: I0216 14:59:37.118083 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-67snq"
Feb 16 14:59:37 crc kubenswrapper[4748]: I0216 14:59:37.168518 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-67snq"
Feb 16 14:59:37 crc kubenswrapper[4748]: I0216 14:59:37.308969 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-s5dp2"
Feb 16 14:59:37 crc kubenswrapper[4748]: I0216 14:59:37.309015 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-s5dp2"
Feb 16 14:59:37 crc kubenswrapper[4748]: I0216 14:59:37.369756 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-67snq"
Feb 16 14:59:37 crc kubenswrapper[4748]: I0216 14:59:37.373115 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-s5dp2"
Feb 16 14:59:38 crc kubenswrapper[4748]: I0216 14:59:38.377958 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-s5dp2"
Feb 16 14:59:39 crc kubenswrapper[4748]: I0216 14:59:39.529283 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:39 crc kubenswrapper[4748]: I0216 14:59:39.529821 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:39 crc kubenswrapper[4748]: I0216 14:59:39.594992 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:39 crc kubenswrapper[4748]: I0216 14:59:39.781155 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:39 crc kubenswrapper[4748]: I0216 14:59:39.781678 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:39 crc kubenswrapper[4748]: I0216 14:59:39.830143 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:40 crc kubenswrapper[4748]: I0216 14:59:40.376318 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wcmxf"
Feb 16 14:59:40 crc kubenswrapper[4748]: I0216 14:59:40.386528 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wk4zf"
Feb 16 14:59:50 crc kubenswrapper[4748]: I0216 14:59:50.990607 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" podUID="a480239a-6f26-4189-9a3c-17896449a6e3" containerName="registry" containerID="cri-o://216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1" gracePeriod=30
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.375336 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.405274 4748 generic.go:334] "Generic (PLEG): container finished" podID="a480239a-6f26-4189-9a3c-17896449a6e3" containerID="216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1" exitCode=0
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.405329 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-26scw"
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.405329 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" event={"ID":"a480239a-6f26-4189-9a3c-17896449a6e3","Type":"ContainerDied","Data":"216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1"}
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.405479 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-26scw" event={"ID":"a480239a-6f26-4189-9a3c-17896449a6e3","Type":"ContainerDied","Data":"ca0d7f83c8b659fab9fc0c4194ed2a2cb4bc6ec2ac1b6a948c43689bd757843d"}
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.405524 4748 scope.go:117] "RemoveContainer" containerID="216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1"
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.421572 4748 scope.go:117] "RemoveContainer" containerID="216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1"
Feb 16 14:59:52 crc kubenswrapper[4748]: E0216 14:59:52.421972 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1\": container with ID starting with 216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1 not found: ID does not exist" containerID="216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1"
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.422021 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1"} err="failed to get container status \"216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1\": rpc error: code = NotFound desc = could not find container \"216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1\": container with ID starting with 216ee34fb1ae035cd3b87429e123ad268db0c1da20c71028cdc8cfe65470d5d1 not found: ID does not exist"
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563209 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-registry-tls\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563312 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-registry-certificates\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563545 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563587 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-bound-sa-token\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563627 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a480239a-6f26-4189-9a3c-17896449a6e3-installation-pull-secrets\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563676 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-trusted-ca\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563769 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcxjr\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-kube-api-access-dcxjr\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.563847 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a480239a-6f26-4189-9a3c-17896449a6e3-ca-trust-extracted\") pod \"a480239a-6f26-4189-9a3c-17896449a6e3\" (UID: \"a480239a-6f26-4189-9a3c-17896449a6e3\") "
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.564261 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.564986 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.571429 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.571471 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a480239a-6f26-4189-9a3c-17896449a6e3-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.571907 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.572132 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-kube-api-access-dcxjr" (OuterVolumeSpecName: "kube-api-access-dcxjr") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "kube-api-access-dcxjr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.574031 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.591040 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a480239a-6f26-4189-9a3c-17896449a6e3-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a480239a-6f26-4189-9a3c-17896449a6e3" (UID: "a480239a-6f26-4189-9a3c-17896449a6e3"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.665311 4748 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a480239a-6f26-4189-9a3c-17896449a6e3-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.665372 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.665384 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcxjr\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-kube-api-access-dcxjr\") on node \"crc\" DevicePath \"\""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.665394 4748 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a480239a-6f26-4189-9a3c-17896449a6e3-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.665402 4748 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.665410 4748 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a480239a-6f26-4189-9a3c-17896449a6e3-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.665418 4748 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a480239a-6f26-4189-9a3c-17896449a6e3-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.738068 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-26scw"]
Feb 16 14:59:52 crc kubenswrapper[4748]: I0216 14:59:52.741853 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-26scw"]
Feb 16 14:59:53 crc kubenswrapper[4748]: I0216 14:59:53.006013 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a480239a-6f26-4189-9a3c-17896449a6e3" path="/var/lib/kubelet/pods/a480239a-6f26-4189-9a3c-17896449a6e3/volumes"
Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.188353 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8"]
Feb 16 15:00:00 crc kubenswrapper[4748]: E0216 15:00:00.189461 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a480239a-6f26-4189-9a3c-17896449a6e3" containerName="registry"
Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.189489 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a480239a-6f26-4189-9a3c-17896449a6e3" containerName="registry"
Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.189655 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="a480239a-6f26-4189-9a3c-17896449a6e3" containerName="registry"
Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.190259 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.192257 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.192896 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.197800 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8"] Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.223175 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb70c99c-1f99-4c49-86da-0b2102f52ea3-config-volume\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.223329 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb70c99c-1f99-4c49-86da-0b2102f52ea3-secret-volume\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.223458 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdsqc\" (UniqueName: \"kubernetes.io/projected/eb70c99c-1f99-4c49-86da-0b2102f52ea3-kube-api-access-sdsqc\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.324129 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb70c99c-1f99-4c49-86da-0b2102f52ea3-config-volume\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.324196 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb70c99c-1f99-4c49-86da-0b2102f52ea3-secret-volume\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.324262 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdsqc\" (UniqueName: \"kubernetes.io/projected/eb70c99c-1f99-4c49-86da-0b2102f52ea3-kube-api-access-sdsqc\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.325907 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb70c99c-1f99-4c49-86da-0b2102f52ea3-config-volume\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.332965 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/eb70c99c-1f99-4c49-86da-0b2102f52ea3-secret-volume\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.347100 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdsqc\" (UniqueName: \"kubernetes.io/projected/eb70c99c-1f99-4c49-86da-0b2102f52ea3-kube-api-access-sdsqc\") pod \"collect-profiles-29520900-x7gz8\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.530144 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:00 crc kubenswrapper[4748]: I0216 15:00:00.753945 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8"] Feb 16 15:00:01 crc kubenswrapper[4748]: I0216 15:00:01.478700 4748 generic.go:334] "Generic (PLEG): container finished" podID="eb70c99c-1f99-4c49-86da-0b2102f52ea3" containerID="6cb6888f97f2d383da1c31f1d03810f068c44e35e47c021e5ffecadc892a82b4" exitCode=0 Feb 16 15:00:01 crc kubenswrapper[4748]: I0216 15:00:01.479052 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" event={"ID":"eb70c99c-1f99-4c49-86da-0b2102f52ea3","Type":"ContainerDied","Data":"6cb6888f97f2d383da1c31f1d03810f068c44e35e47c021e5ffecadc892a82b4"} Feb 16 15:00:01 crc kubenswrapper[4748]: I0216 15:00:01.479094 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" 
event={"ID":"eb70c99c-1f99-4c49-86da-0b2102f52ea3","Type":"ContainerStarted","Data":"acedee06b94ce659ca2b2b7d645131a9fc738b6598465d3a1772d92046c1d788"} Feb 16 15:00:02 crc kubenswrapper[4748]: I0216 15:00:02.783121 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:02 crc kubenswrapper[4748]: I0216 15:00:02.974956 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb70c99c-1f99-4c49-86da-0b2102f52ea3-secret-volume\") pod \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " Feb 16 15:00:02 crc kubenswrapper[4748]: I0216 15:00:02.975014 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdsqc\" (UniqueName: \"kubernetes.io/projected/eb70c99c-1f99-4c49-86da-0b2102f52ea3-kube-api-access-sdsqc\") pod \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " Feb 16 15:00:02 crc kubenswrapper[4748]: I0216 15:00:02.975051 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb70c99c-1f99-4c49-86da-0b2102f52ea3-config-volume\") pod \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\" (UID: \"eb70c99c-1f99-4c49-86da-0b2102f52ea3\") " Feb 16 15:00:02 crc kubenswrapper[4748]: I0216 15:00:02.975936 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb70c99c-1f99-4c49-86da-0b2102f52ea3-config-volume" (OuterVolumeSpecName: "config-volume") pod "eb70c99c-1f99-4c49-86da-0b2102f52ea3" (UID: "eb70c99c-1f99-4c49-86da-0b2102f52ea3"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:00:02 crc kubenswrapper[4748]: I0216 15:00:02.983806 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb70c99c-1f99-4c49-86da-0b2102f52ea3-kube-api-access-sdsqc" (OuterVolumeSpecName: "kube-api-access-sdsqc") pod "eb70c99c-1f99-4c49-86da-0b2102f52ea3" (UID: "eb70c99c-1f99-4c49-86da-0b2102f52ea3"). InnerVolumeSpecName "kube-api-access-sdsqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:00:02 crc kubenswrapper[4748]: I0216 15:00:02.984011 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb70c99c-1f99-4c49-86da-0b2102f52ea3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb70c99c-1f99-4c49-86da-0b2102f52ea3" (UID: "eb70c99c-1f99-4c49-86da-0b2102f52ea3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:00:03 crc kubenswrapper[4748]: I0216 15:00:03.076863 4748 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb70c99c-1f99-4c49-86da-0b2102f52ea3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:03 crc kubenswrapper[4748]: I0216 15:00:03.077281 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdsqc\" (UniqueName: \"kubernetes.io/projected/eb70c99c-1f99-4c49-86da-0b2102f52ea3-kube-api-access-sdsqc\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:03 crc kubenswrapper[4748]: I0216 15:00:03.077309 4748 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb70c99c-1f99-4c49-86da-0b2102f52ea3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:00:03 crc kubenswrapper[4748]: I0216 15:00:03.495276 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" 
event={"ID":"eb70c99c-1f99-4c49-86da-0b2102f52ea3","Type":"ContainerDied","Data":"acedee06b94ce659ca2b2b7d645131a9fc738b6598465d3a1772d92046c1d788"} Feb 16 15:00:03 crc kubenswrapper[4748]: I0216 15:00:03.495368 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acedee06b94ce659ca2b2b7d645131a9fc738b6598465d3a1772d92046c1d788" Feb 16 15:00:03 crc kubenswrapper[4748]: I0216 15:00:03.495463 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8" Feb 16 15:00:04 crc kubenswrapper[4748]: I0216 15:00:04.729981 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:00:04 crc kubenswrapper[4748]: I0216 15:00:04.730060 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:00:04 crc kubenswrapper[4748]: I0216 15:00:04.730123 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 15:00:04 crc kubenswrapper[4748]: I0216 15:00:04.731229 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f548aebb058ac34928d579a653cfc879fc0a3301fa7df0272936a4d6928bf41"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:00:04 crc 
kubenswrapper[4748]: I0216 15:00:04.731328 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://1f548aebb058ac34928d579a653cfc879fc0a3301fa7df0272936a4d6928bf41" gracePeriod=600 Feb 16 15:00:05 crc kubenswrapper[4748]: I0216 15:00:05.511752 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="1f548aebb058ac34928d579a653cfc879fc0a3301fa7df0272936a4d6928bf41" exitCode=0 Feb 16 15:00:05 crc kubenswrapper[4748]: I0216 15:00:05.511844 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"1f548aebb058ac34928d579a653cfc879fc0a3301fa7df0272936a4d6928bf41"} Feb 16 15:00:05 crc kubenswrapper[4748]: I0216 15:00:05.512125 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"e3a50946846049e1054f070fd6aacdb230e1f890e503599113e81908b3aa8a60"} Feb 16 15:00:05 crc kubenswrapper[4748]: I0216 15:00:05.512155 4748 scope.go:117] "RemoveContainer" containerID="80510408700c70453c790a95a0a710c6fd8717909a902c989b8ec40ef8d91750" Feb 16 15:02:04 crc kubenswrapper[4748]: I0216 15:02:04.729837 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:02:04 crc kubenswrapper[4748]: I0216 15:02:04.730776 4748 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:02:34 crc kubenswrapper[4748]: I0216 15:02:34.729783 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:02:34 crc kubenswrapper[4748]: I0216 15:02:34.730965 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:02:55 crc kubenswrapper[4748]: I0216 15:02:55.372256 4748 scope.go:117] "RemoveContainer" containerID="cb29c369b9667b65fdf0586add8ba2b05fcb0194dd140e740b603511a7178fb7" Feb 16 15:03:04 crc kubenswrapper[4748]: I0216 15:03:04.729659 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:03:04 crc kubenswrapper[4748]: I0216 15:03:04.730366 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:03:04 crc kubenswrapper[4748]: I0216 
15:03:04.730441 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 15:03:04 crc kubenswrapper[4748]: I0216 15:03:04.731398 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e3a50946846049e1054f070fd6aacdb230e1f890e503599113e81908b3aa8a60"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:03:04 crc kubenswrapper[4748]: I0216 15:03:04.731513 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://e3a50946846049e1054f070fd6aacdb230e1f890e503599113e81908b3aa8a60" gracePeriod=600 Feb 16 15:03:05 crc kubenswrapper[4748]: I0216 15:03:05.378063 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="e3a50946846049e1054f070fd6aacdb230e1f890e503599113e81908b3aa8a60" exitCode=0 Feb 16 15:03:05 crc kubenswrapper[4748]: I0216 15:03:05.378445 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"e3a50946846049e1054f070fd6aacdb230e1f890e503599113e81908b3aa8a60"} Feb 16 15:03:05 crc kubenswrapper[4748]: I0216 15:03:05.378478 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"4e49a4e16e48e1a2e71e7ad48688259341e07a21e8d2998708ad1242f0a4ff61"} Feb 16 15:03:05 crc kubenswrapper[4748]: I0216 15:03:05.378495 4748 scope.go:117] 
"RemoveContainer" containerID="1f548aebb058ac34928d579a653cfc879fc0a3301fa7df0272936a4d6928bf41" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.910677 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc"] Feb 16 15:04:16 crc kubenswrapper[4748]: E0216 15:04:16.911394 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb70c99c-1f99-4c49-86da-0b2102f52ea3" containerName="collect-profiles" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.911405 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb70c99c-1f99-4c49-86da-0b2102f52ea3" containerName="collect-profiles" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.911489 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb70c99c-1f99-4c49-86da-0b2102f52ea3" containerName="collect-profiles" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.912158 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.914117 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.922751 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc"] Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.935038 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbh2p\" (UniqueName: \"kubernetes.io/projected/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-kube-api-access-jbh2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.935120 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:16 crc kubenswrapper[4748]: I0216 15:04:16.935151 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: 
I0216 15:04:17.036152 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.036204 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.036922 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.037215 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbh2p\" (UniqueName: \"kubernetes.io/projected/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-kube-api-access-jbh2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.037648 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.076310 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbh2p\" (UniqueName: \"kubernetes.io/projected/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-kube-api-access-jbh2p\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.226327 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.482637 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc"] Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.890757 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" event={"ID":"ea460b26-21e3-40f4-a7bb-377fbc91eb7c","Type":"ContainerStarted","Data":"119da72c4dce8e1e2d008f74913fbe8e7aadf81390a08da3d73f55720c0bbfcb"} Feb 16 15:04:17 crc kubenswrapper[4748]: I0216 15:04:17.891379 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" event={"ID":"ea460b26-21e3-40f4-a7bb-377fbc91eb7c","Type":"ContainerStarted","Data":"1d3d64467940aada4b18a0684bcdf22ae3310ddae747dec8fd3dea2de7950b88"} Feb 16 15:04:18 crc kubenswrapper[4748]: I0216 15:04:18.899480 4748 
generic.go:334] "Generic (PLEG): container finished" podID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerID="119da72c4dce8e1e2d008f74913fbe8e7aadf81390a08da3d73f55720c0bbfcb" exitCode=0 Feb 16 15:04:18 crc kubenswrapper[4748]: I0216 15:04:18.899556 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" event={"ID":"ea460b26-21e3-40f4-a7bb-377fbc91eb7c","Type":"ContainerDied","Data":"119da72c4dce8e1e2d008f74913fbe8e7aadf81390a08da3d73f55720c0bbfcb"} Feb 16 15:04:18 crc kubenswrapper[4748]: I0216 15:04:18.901452 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:04:20 crc kubenswrapper[4748]: I0216 15:04:20.920020 4748 generic.go:334] "Generic (PLEG): container finished" podID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerID="acb575852cbf51f982535a56c66d723eed7f6525780bbd789d84c6892553b30f" exitCode=0 Feb 16 15:04:20 crc kubenswrapper[4748]: I0216 15:04:20.920146 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" event={"ID":"ea460b26-21e3-40f4-a7bb-377fbc91eb7c","Type":"ContainerDied","Data":"acb575852cbf51f982535a56c66d723eed7f6525780bbd789d84c6892553b30f"} Feb 16 15:04:21 crc kubenswrapper[4748]: I0216 15:04:21.931096 4748 generic.go:334] "Generic (PLEG): container finished" podID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerID="225aeba2bd6579d897e74df787003e46879019e9d171200722a759a58be0f46a" exitCode=0 Feb 16 15:04:21 crc kubenswrapper[4748]: I0216 15:04:21.931186 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" event={"ID":"ea460b26-21e3-40f4-a7bb-377fbc91eb7c","Type":"ContainerDied","Data":"225aeba2bd6579d897e74df787003e46879019e9d171200722a759a58be0f46a"} Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 
15:04:23.204575 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc"
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.226538 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbh2p\" (UniqueName: \"kubernetes.io/projected/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-kube-api-access-jbh2p\") pod \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") "
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.226643 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-bundle\") pod \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") "
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.226693 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-util\") pod \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\" (UID: \"ea460b26-21e3-40f4-a7bb-377fbc91eb7c\") "
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.230223 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-bundle" (OuterVolumeSpecName: "bundle") pod "ea460b26-21e3-40f4-a7bb-377fbc91eb7c" (UID: "ea460b26-21e3-40f4-a7bb-377fbc91eb7c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.233453 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-kube-api-access-jbh2p" (OuterVolumeSpecName: "kube-api-access-jbh2p") pod "ea460b26-21e3-40f4-a7bb-377fbc91eb7c" (UID: "ea460b26-21e3-40f4-a7bb-377fbc91eb7c"). InnerVolumeSpecName "kube-api-access-jbh2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.327592 4748 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.327632 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jbh2p\" (UniqueName: \"kubernetes.io/projected/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-kube-api-access-jbh2p\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.386912 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-util" (OuterVolumeSpecName: "util") pod "ea460b26-21e3-40f4-a7bb-377fbc91eb7c" (UID: "ea460b26-21e3-40f4-a7bb-377fbc91eb7c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.428618 4748 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea460b26-21e3-40f4-a7bb-377fbc91eb7c-util\") on node \"crc\" DevicePath \"\""
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.948503 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc" event={"ID":"ea460b26-21e3-40f4-a7bb-377fbc91eb7c","Type":"ContainerDied","Data":"1d3d64467940aada4b18a0684bcdf22ae3310ddae747dec8fd3dea2de7950b88"}
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.948927 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3d64467940aada4b18a0684bcdf22ae3310ddae747dec8fd3dea2de7950b88"
Feb 16 15:04:23 crc kubenswrapper[4748]: I0216 15:04:23.948644 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.304973 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"]
Feb 16 15:04:32 crc kubenswrapper[4748]: E0216 15:04:32.306095 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerName="extract"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.306111 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerName="extract"
Feb 16 15:04:32 crc kubenswrapper[4748]: E0216 15:04:32.306130 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerName="pull"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.306138 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerName="pull"
Feb 16 15:04:32 crc kubenswrapper[4748]: E0216 15:04:32.306156 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerName="util"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.306163 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerName="util"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.306277 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea460b26-21e3-40f4-a7bb-377fbc91eb7c" containerName="extract"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.306815 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.310199 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.310698 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-ssgjq"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.310802 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.326552 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.352647 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6fh\" (UniqueName: \"kubernetes.io/projected/f16a6a7b-f861-465e-bfc7-6c94642de504-kube-api-access-th6fh\") pod \"obo-prometheus-operator-68bc856cb9-6scp6\" (UID: \"f16a6a7b-f861-465e-bfc7-6c94642de504\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.422353 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.423213 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.425812 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.427927 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-fqcd6"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.436937 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.438454 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.440147 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.454497 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th6fh\" (UniqueName: \"kubernetes.io/projected/f16a6a7b-f861-465e-bfc7-6c94642de504-kube-api-access-th6fh\") pod \"obo-prometheus-operator-68bc856cb9-6scp6\" (UID: \"f16a6a7b-f861-465e-bfc7-6c94642de504\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.454556 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt\" (UID: \"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.454591 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a879c65c-71d1-4772-b4cc-6d30cbc5210f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7\" (UID: \"a879c65c-71d1-4772-b4cc-6d30cbc5210f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.454619 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt\" (UID: \"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.454671 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a879c65c-71d1-4772-b4cc-6d30cbc5210f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7\" (UID: \"a879c65c-71d1-4772-b4cc-6d30cbc5210f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.468418 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.484040 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th6fh\" (UniqueName: \"kubernetes.io/projected/f16a6a7b-f861-465e-bfc7-6c94642de504-kube-api-access-th6fh\") pod \"obo-prometheus-operator-68bc856cb9-6scp6\" (UID: \"f16a6a7b-f861-465e-bfc7-6c94642de504\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.556336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a879c65c-71d1-4772-b4cc-6d30cbc5210f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7\" (UID: \"a879c65c-71d1-4772-b4cc-6d30cbc5210f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.557461 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt\" (UID: \"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.557560 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a879c65c-71d1-4772-b4cc-6d30cbc5210f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7\" (UID: \"a879c65c-71d1-4772-b4cc-6d30cbc5210f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.557650 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt\" (UID: \"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.561781 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt\" (UID: \"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.563124 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a879c65c-71d1-4772-b4cc-6d30cbc5210f-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7\" (UID: \"a879c65c-71d1-4772-b4cc-6d30cbc5210f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.563584 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt\" (UID: \"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.567090 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a879c65c-71d1-4772-b4cc-6d30cbc5210f-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7\" (UID: \"a879c65c-71d1-4772-b4cc-6d30cbc5210f\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.624656 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.651325 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6x2nh"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.652203 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.654782 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rctz8"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.666334 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.693598 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6x2nh"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.740888 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.752992 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.760582 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9060b85c-a31e-4caa-9552-e2d2a4e7cba5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6x2nh\" (UID: \"9060b85c-a31e-4caa-9552-e2d2a4e7cba5\") " pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.760681 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-565d6\" (UniqueName: \"kubernetes.io/projected/9060b85c-a31e-4caa-9552-e2d2a4e7cba5-kube-api-access-565d6\") pod \"observability-operator-59bdc8b94-6x2nh\" (UID: \"9060b85c-a31e-4caa-9552-e2d2a4e7cba5\") " pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.875750 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9vb7j"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.877243 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-565d6\" (UniqueName: \"kubernetes.io/projected/9060b85c-a31e-4caa-9552-e2d2a4e7cba5-kube-api-access-565d6\") pod \"observability-operator-59bdc8b94-6x2nh\" (UID: \"9060b85c-a31e-4caa-9552-e2d2a4e7cba5\") " pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.893511 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9060b85c-a31e-4caa-9552-e2d2a4e7cba5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6x2nh\" (UID: \"9060b85c-a31e-4caa-9552-e2d2a4e7cba5\") " pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.893512 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9vb7j"]
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.893583 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.896759 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-tb4lc"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.899777 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9060b85c-a31e-4caa-9552-e2d2a4e7cba5-observability-operator-tls\") pod \"observability-operator-59bdc8b94-6x2nh\" (UID: \"9060b85c-a31e-4caa-9552-e2d2a4e7cba5\") " pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.916911 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-565d6\" (UniqueName: \"kubernetes.io/projected/9060b85c-a31e-4caa-9552-e2d2a4e7cba5-kube-api-access-565d6\") pod \"observability-operator-59bdc8b94-6x2nh\" (UID: \"9060b85c-a31e-4caa-9552-e2d2a4e7cba5\") " pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.995836 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3019993-87d0-4427-a928-fa01e0a0f419-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9vb7j\" (UID: \"c3019993-87d0-4427-a928-fa01e0a0f419\") " pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:32 crc kubenswrapper[4748]: I0216 15:04:32.995902 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svnfw\" (UniqueName: \"kubernetes.io/projected/c3019993-87d0-4427-a928-fa01e0a0f419-kube-api-access-svnfw\") pod \"perses-operator-5bf474d74f-9vb7j\" (UID: \"c3019993-87d0-4427-a928-fa01e0a0f419\") " pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.014991 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.097206 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3019993-87d0-4427-a928-fa01e0a0f419-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9vb7j\" (UID: \"c3019993-87d0-4427-a928-fa01e0a0f419\") " pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.097274 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svnfw\" (UniqueName: \"kubernetes.io/projected/c3019993-87d0-4427-a928-fa01e0a0f419-kube-api-access-svnfw\") pod \"perses-operator-5bf474d74f-9vb7j\" (UID: \"c3019993-87d0-4427-a928-fa01e0a0f419\") " pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.100512 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/c3019993-87d0-4427-a928-fa01e0a0f419-openshift-service-ca\") pod \"perses-operator-5bf474d74f-9vb7j\" (UID: \"c3019993-87d0-4427-a928-fa01e0a0f419\") " pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.127384 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svnfw\" (UniqueName: \"kubernetes.io/projected/c3019993-87d0-4427-a928-fa01e0a0f419-kube-api-access-svnfw\") pod \"perses-operator-5bf474d74f-9vb7j\" (UID: \"c3019993-87d0-4427-a928-fa01e0a0f419\") " pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.184816 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7"]
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.194520 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6"]
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.233595 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.318471 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt"]
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.327342 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-6x2nh"]
Feb 16 15:04:33 crc kubenswrapper[4748]: W0216 15:04:33.335167 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3c1d3ac_7949_4827_8cd5_e0ac8f8d280c.slice/crio-e11181b36455349a5e33c11fbe15d16c3b4f809ca73f99b9c4729a6219e07f8e WatchSource:0}: Error finding container e11181b36455349a5e33c11fbe15d16c3b4f809ca73f99b9c4729a6219e07f8e: Status 404 returned error can't find the container with id e11181b36455349a5e33c11fbe15d16c3b4f809ca73f99b9c4729a6219e07f8e
Feb 16 15:04:33 crc kubenswrapper[4748]: I0216 15:04:33.494259 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-9vb7j"]
Feb 16 15:04:33 crc kubenswrapper[4748]: W0216 15:04:33.501440 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3019993_87d0_4427_a928_fa01e0a0f419.slice/crio-645750b4ffeb724fbf2b568dbdb5ecaa09fbc4f64f42850201cd93084ed90205 WatchSource:0}: Error finding container 645750b4ffeb724fbf2b568dbdb5ecaa09fbc4f64f42850201cd93084ed90205: Status 404 returned error can't find the container with id 645750b4ffeb724fbf2b568dbdb5ecaa09fbc4f64f42850201cd93084ed90205
Feb 16 15:04:34 crc kubenswrapper[4748]: I0216 15:04:34.024327 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6" event={"ID":"f16a6a7b-f861-465e-bfc7-6c94642de504","Type":"ContainerStarted","Data":"67cb5b439515aded8f1c33590e919a111d032f13a09e0a4adfa655768f5dc26c"}
Feb 16 15:04:34 crc kubenswrapper[4748]: I0216 15:04:34.026541 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh" event={"ID":"9060b85c-a31e-4caa-9552-e2d2a4e7cba5","Type":"ContainerStarted","Data":"8c0152ee533352c3eb244f218e3a9a1b41b503831ef1fc2a2d2b111af8109bfd"}
Feb 16 15:04:34 crc kubenswrapper[4748]: I0216 15:04:34.027860 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9vb7j" event={"ID":"c3019993-87d0-4427-a928-fa01e0a0f419","Type":"ContainerStarted","Data":"645750b4ffeb724fbf2b568dbdb5ecaa09fbc4f64f42850201cd93084ed90205"}
Feb 16 15:04:34 crc kubenswrapper[4748]: I0216 15:04:34.028906 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt" event={"ID":"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c","Type":"ContainerStarted","Data":"e11181b36455349a5e33c11fbe15d16c3b4f809ca73f99b9c4729a6219e07f8e"}
Feb 16 15:04:34 crc kubenswrapper[4748]: I0216 15:04:34.029778 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7" event={"ID":"a879c65c-71d1-4772-b4cc-6d30cbc5210f","Type":"ContainerStarted","Data":"93176572af88cdc17778f771c6989749eb413c064ab80827124484ec353305ff"}
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.160066 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6" event={"ID":"f16a6a7b-f861-465e-bfc7-6c94642de504","Type":"ContainerStarted","Data":"f993d8e0ee47703a07828ed87ee4af9e1e5d05f94f732917d40d81d486bfff7b"}
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.174867 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh" event={"ID":"9060b85c-a31e-4caa-9552-e2d2a4e7cba5","Type":"ContainerStarted","Data":"90a25c69366708b7f4136d61c7f5e2ced7cf3151a4631de8ca3d02320e9c1f7a"}
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.175054 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.176281 4748 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-6x2nh container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused" start-of-body=
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.176342 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh" podUID="9060b85c-a31e-4caa-9552-e2d2a4e7cba5" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": dial tcp 10.217.0.77:8081: connect: connection refused"
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.176766 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-9vb7j" event={"ID":"c3019993-87d0-4427-a928-fa01e0a0f419","Type":"ContainerStarted","Data":"bb528a110df97be46f63dd5074c13eec6664f47f6c56627cb31d74bbab83b9fe"}
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.176892 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-9vb7j"
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.178308 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt" event={"ID":"e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c","Type":"ContainerStarted","Data":"6d0740b98c00d8ebe2d64379a42f4fbef84ffd9df7f97329cd6cebbc6c8a0b8b"}
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.180123 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7" event={"ID":"a879c65c-71d1-4772-b4cc-6d30cbc5210f","Type":"ContainerStarted","Data":"f4a33b8b6e8120960a7978a1f538d384c0e0bf4301e01a52de0c6c27dd57506c"}
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.195277 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-6scp6" podStartSLOduration=1.776030294 podStartE2EDuration="12.195251676s" podCreationTimestamp="2026-02-16 15:04:32 +0000 UTC" firstStartedPulling="2026-02-16 15:04:33.206842126 +0000 UTC m=+698.898511165" lastFinishedPulling="2026-02-16 15:04:43.626063508 +0000 UTC m=+709.317732547" observedRunningTime="2026-02-16 15:04:44.1888738 +0000 UTC m=+709.880542839" watchObservedRunningTime="2026-02-16 15:04:44.195251676 +0000 UTC m=+709.886920715"
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.215293 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt" podStartSLOduration=1.928085862 podStartE2EDuration="12.215274467s" podCreationTimestamp="2026-02-16 15:04:32 +0000 UTC" firstStartedPulling="2026-02-16 15:04:33.33874832 +0000 UTC m=+699.030417349" lastFinishedPulling="2026-02-16 15:04:43.625936915 +0000 UTC m=+709.317605954" observedRunningTime="2026-02-16 15:04:44.213258878 +0000 UTC m=+709.904927917" watchObservedRunningTime="2026-02-16 15:04:44.215274467 +0000 UTC m=+709.906943516"
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.231295 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-9vb7j" podStartSLOduration=2.018648452 podStartE2EDuration="12.231269859s" podCreationTimestamp="2026-02-16 15:04:32 +0000 UTC" firstStartedPulling="2026-02-16 15:04:33.504124335 +0000 UTC m=+699.195793374" lastFinishedPulling="2026-02-16 15:04:43.716745742 +0000 UTC m=+709.408414781" observedRunningTime="2026-02-16 15:04:44.230205583 +0000 UTC m=+709.921874622" watchObservedRunningTime="2026-02-16 15:04:44.231269859 +0000 UTC m=+709.922938898"
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.264490 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7" podStartSLOduration=1.830427717 podStartE2EDuration="12.264460863s" podCreationTimestamp="2026-02-16 15:04:32 +0000 UTC" firstStartedPulling="2026-02-16 15:04:33.191952931 +0000 UTC m=+698.883621970" lastFinishedPulling="2026-02-16 15:04:43.625986077 +0000 UTC m=+709.317655116" observedRunningTime="2026-02-16 15:04:44.25493961 +0000 UTC m=+709.946608739" watchObservedRunningTime="2026-02-16 15:04:44.264460863 +0000 UTC m=+709.956129902"
Feb 16 15:04:44 crc kubenswrapper[4748]: I0216 15:04:44.289483 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh" podStartSLOduration=2.021280337 podStartE2EDuration="12.289453136s" podCreationTimestamp="2026-02-16 15:04:32 +0000 UTC" firstStartedPulling="2026-02-16 15:04:33.357754986 +0000 UTC m=+699.049424015" lastFinishedPulling="2026-02-16 15:04:43.625927765 +0000 UTC m=+709.317596814" observedRunningTime="2026-02-16 15:04:44.288049522 +0000 UTC m=+709.979718561" watchObservedRunningTime="2026-02-16 15:04:44.289453136 +0000 UTC m=+709.981122175"
Feb 16 15:04:45 crc kubenswrapper[4748]: I0216 15:04:45.187646 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-6x2nh"
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034119 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r662f"]
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034568 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-controller" containerID="cri-o://65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034668 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034694 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="nbdb" containerID="cri-o://6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034752 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="sbdb" containerID="cri-o://4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034809 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-node" containerID="cri-o://0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034877 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-acl-logging" containerID="cri-o://39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.034809 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="northd" containerID="cri-o://c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.076027 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" containerID="cri-o://c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" gracePeriod=30
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.209780 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/3.log"
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.212922 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovn-acl-logging/0.log"
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.213478 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovn-controller/0.log"
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.213899 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" exitCode=0
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.213940 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" exitCode=0
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.213951 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" exitCode=143
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.213963 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" exitCode=143
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.214034 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e"}
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.214076 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8"}
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.214087 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8"}
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.214101 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891"}
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.220764 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/2.log"
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.221341 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/1.log"
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.221402 4748 generic.go:334] "Generic (PLEG): container finished" podID="1724aef8-25e0-40aa-86be-2ca7849960f1" containerID="d9bd7e0c3f7c247fb4b65a2732e952b3dbcbebd8f74cfd56d321817b97bdd8bb" exitCode=2
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.221451 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerDied","Data":"d9bd7e0c3f7c247fb4b65a2732e952b3dbcbebd8f74cfd56d321817b97bdd8bb"}
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.221505 4748 scope.go:117] "RemoveContainer" containerID="0ad23c17d54e7864b60e8f1c1b50e3c2ba3cc07bfba626d763cd1232051e8659"
Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.222362 4748 scope.go:117] "RemoveContainer" containerID="d9bd7e0c3f7c247fb4b65a2732e952b3dbcbebd8f74cfd56d321817b97bdd8bb"
Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.222609 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with
CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dw679_openshift-multus(1724aef8-25e0-40aa-86be-2ca7849960f1)\"" pod="openshift-multus/multus-dw679" podUID="1724aef8-25e0-40aa-86be-2ca7849960f1" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.402379 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/3.log" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.408625 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovn-acl-logging/0.log" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.410859 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovn-controller/0.log" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.411426 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468607 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bhbtk"] Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468844 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-node" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468859 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-node" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468873 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="northd" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468880 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="northd" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468887 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468895 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468902 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468908 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468916 4748 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kubecfg-setup" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468922 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kubecfg-setup" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468932 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468938 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468946 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468955 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468965 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-acl-logging" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468972 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-acl-logging" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468979 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.468985 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.468994 4748 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="sbdb" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469000 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="sbdb" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.469007 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="nbdb" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469014 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="nbdb" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469105 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="northd" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469115 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469122 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="nbdb" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469131 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469138 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469146 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469156 4748 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovn-acl-logging" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469162 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="kube-rbac-proxy-node" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469169 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469177 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="sbdb" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469183 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.469266 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469273 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469365 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: E0216 15:04:47.469459 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.469467 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerName="ovnkube-controller" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.470947 4748 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538288 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-netd\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538338 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-script-lib\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538362 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67j69\" (UniqueName: \"kubernetes.io/projected/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-kube-api-access-67j69\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538377 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-systemd-units\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538397 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-etc-openvswitch\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538421 4748 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-netns\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538437 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-systemd\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538456 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-ovn-kubernetes\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538481 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-slash\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538465 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538499 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovn-node-metrics-cert\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538591 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-node-log\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538578 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538649 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538624 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538678 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-node-log" (OuterVolumeSpecName: "node-log") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538708 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538757 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538779 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-kubelet\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538815 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-bin\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538783 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538808 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-slash" (OuterVolumeSpecName: "host-slash") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538853 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-env-overrides\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538890 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-var-lib-openvswitch\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538921 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-openvswitch\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538962 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-log-socket\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538997 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-config\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539046 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-ovn\") pod \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\" (UID: \"2f88ea54-3399-4d84-bc96-5b7d9575bbf5\") " Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538834 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539286 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-log-socket" (OuterVolumeSpecName: "log-socket") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.538871 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539174 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539247 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539260 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539359 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539365 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539504 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-ovnkube-config\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539554 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-systemd\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539562 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539598 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-var-lib-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539625 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-log-socket\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539693 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-slash\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539751 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.539944 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-node-log\") pod \"ovnkube-node-bhbtk\" (UID: 
\"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540024 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-cni-bin\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540117 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-env-overrides\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540138 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-run-ovn-kubernetes\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540171 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-ovn\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540196 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540237 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-kubelet\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540292 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x4xn\" (UniqueName: \"kubernetes.io/projected/319cc918-26be-4962-a33c-55f3059461fc-kube-api-access-7x4xn\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540312 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-systemd-units\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540334 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-run-netns\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540369 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/319cc918-26be-4962-a33c-55f3059461fc-ovn-node-metrics-cert\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540432 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-etc-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540454 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-ovnkube-script-lib\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540473 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-cni-netd\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540553 4748 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540568 4748 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540580 4748 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540590 4748 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-node-log\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540602 4748 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540613 4748 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540623 4748 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540632 4748 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540642 4748 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-var-lib-openvswitch\") 
on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540650 4748 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540660 4748 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540670 4748 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540680 4748 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540691 4748 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540701 4748 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540730 4748 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.540739 4748 reconciler_common.go:293] "Volume 
detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.552222 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-kube-api-access-67j69" (OuterVolumeSpecName: "kube-api-access-67j69") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "kube-api-access-67j69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.554654 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.558354 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2f88ea54-3399-4d84-bc96-5b7d9575bbf5" (UID: "2f88ea54-3399-4d84-bc96-5b7d9575bbf5"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642007 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642059 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-node-log\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642081 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-cni-bin\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642109 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-env-overrides\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642126 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-run-ovn-kubernetes\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc 
kubenswrapper[4748]: I0216 15:04:47.642141 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-ovn\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642158 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642179 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-kubelet\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642201 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x4xn\" (UniqueName: \"kubernetes.io/projected/319cc918-26be-4962-a33c-55f3059461fc-kube-api-access-7x4xn\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642216 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-systemd-units\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 
15:04:47.642235 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-run-netns\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642254 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/319cc918-26be-4962-a33c-55f3059461fc-ovn-node-metrics-cert\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642280 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-etc-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642295 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-ovnkube-script-lib\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642309 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-cni-netd\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642346 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-ovnkube-config\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642364 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-systemd\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642380 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-var-lib-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642400 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-log-socket\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642422 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-slash\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642458 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67j69\" (UniqueName: 
\"kubernetes.io/projected/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-kube-api-access-67j69\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642472 4748 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642483 4748 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f88ea54-3399-4d84-bc96-5b7d9575bbf5-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642525 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-slash\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642551 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-systemd-units\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642560 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-run-netns\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642617 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642640 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-node-log\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.642661 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-cni-bin\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643243 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-env-overrides\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643279 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-run-ovn-kubernetes\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643306 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-ovn\") pod \"ovnkube-node-bhbtk\" (UID: 
\"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643331 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643353 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-kubelet\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643696 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-host-cni-netd\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643804 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-var-lib-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643831 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-run-systemd\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643823 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-etc-openvswitch\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.643859 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/319cc918-26be-4962-a33c-55f3059461fc-log-socket\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.644076 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-ovnkube-script-lib\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.644862 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/319cc918-26be-4962-a33c-55f3059461fc-ovnkube-config\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.648470 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/319cc918-26be-4962-a33c-55f3059461fc-ovn-node-metrics-cert\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 
15:04:47.664375 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x4xn\" (UniqueName: \"kubernetes.io/projected/319cc918-26be-4962-a33c-55f3059461fc-kube-api-access-7x4xn\") pod \"ovnkube-node-bhbtk\" (UID: \"319cc918-26be-4962-a33c-55f3059461fc\") " pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:47 crc kubenswrapper[4748]: I0216 15:04:47.790196 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.229142 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovnkube-controller/3.log" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.232420 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovn-acl-logging/0.log" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233160 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-r662f_2f88ea54-3399-4d84-bc96-5b7d9575bbf5/ovn-controller/0.log" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233645 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" exitCode=0 Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233673 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" exitCode=0 Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233685 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" exitCode=0 Feb 
16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233694 4748 generic.go:334] "Generic (PLEG): container finished" podID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" containerID="c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" exitCode=0 Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233803 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233801 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d"} Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233881 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d"} Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233904 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01"} Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233923 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354"} Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233947 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-r662f" 
event={"ID":"2f88ea54-3399-4d84-bc96-5b7d9575bbf5","Type":"ContainerDied","Data":"9902f07f2c6bc2c2b6676dc6c7a46a3bc3887f2e44d6b11b872eb627a49fcdd2"} Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.233979 4748 scope.go:117] "RemoveContainer" containerID="c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.236192 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/2.log" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.238825 4748 generic.go:334] "Generic (PLEG): container finished" podID="319cc918-26be-4962-a33c-55f3059461fc" containerID="ca9aac9cb27918d939a4bc94e64cec72ee7ff31798a5e3065dcb1d2017c1b9c8" exitCode=0 Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.238863 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerDied","Data":"ca9aac9cb27918d939a4bc94e64cec72ee7ff31798a5e3065dcb1d2017c1b9c8"} Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.239097 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"ee8b285786cc1793f52fc860c1f94aad86174b68e952d9824195e32caae6fef2"} Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.272190 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.308800 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r662f"] Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.314220 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-r662f"] Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 
15:04:48.318657 4748 scope.go:117] "RemoveContainer" containerID="4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.346018 4748 scope.go:117] "RemoveContainer" containerID="6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.367778 4748 scope.go:117] "RemoveContainer" containerID="c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.384186 4748 scope.go:117] "RemoveContainer" containerID="fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.408234 4748 scope.go:117] "RemoveContainer" containerID="0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.426061 4748 scope.go:117] "RemoveContainer" containerID="39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.461944 4748 scope.go:117] "RemoveContainer" containerID="65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.489579 4748 scope.go:117] "RemoveContainer" containerID="9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.508689 4748 scope.go:117] "RemoveContainer" containerID="c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.509327 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": container with ID starting with c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d not found: ID does not exist" 
containerID="c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.509385 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d"} err="failed to get container status \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": rpc error: code = NotFound desc = could not find container \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": container with ID starting with c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.509419 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.509918 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": container with ID starting with e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e not found: ID does not exist" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.509960 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"} err="failed to get container status \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": rpc error: code = NotFound desc = could not find container \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": container with ID starting with e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.509991 4748 scope.go:117] 
"RemoveContainer" containerID="4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.510360 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": container with ID starting with 4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d not found: ID does not exist" containerID="4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.510407 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d"} err="failed to get container status \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": rpc error: code = NotFound desc = could not find container \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": container with ID starting with 4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.510426 4748 scope.go:117] "RemoveContainer" containerID="6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.510775 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": container with ID starting with 6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01 not found: ID does not exist" containerID="6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.510796 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01"} err="failed to get container status \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": rpc error: code = NotFound desc = could not find container \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": container with ID starting with 6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.510811 4748 scope.go:117] "RemoveContainer" containerID="c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.511084 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": container with ID starting with c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354 not found: ID does not exist" containerID="c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.511105 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354"} err="failed to get container status \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": rpc error: code = NotFound desc = could not find container \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": container with ID starting with c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.511118 4748 scope.go:117] "RemoveContainer" containerID="fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.511346 4748 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": container with ID starting with fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e not found: ID does not exist" containerID="fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.511365 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e"} err="failed to get container status \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": rpc error: code = NotFound desc = could not find container \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": container with ID starting with fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.511376 4748 scope.go:117] "RemoveContainer" containerID="0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.511792 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": container with ID starting with 0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8 not found: ID does not exist" containerID="0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.511825 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8"} err="failed to get container status \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": rpc error: code = NotFound desc = could not find container 
\"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": container with ID starting with 0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.511842 4748 scope.go:117] "RemoveContainer" containerID="39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.512108 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": container with ID starting with 39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8 not found: ID does not exist" containerID="39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.512136 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8"} err="failed to get container status \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": rpc error: code = NotFound desc = could not find container \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": container with ID starting with 39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.512154 4748 scope.go:117] "RemoveContainer" containerID="65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.512502 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": container with ID starting with 65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891 not found: ID does not exist" 
containerID="65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.512521 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891"} err="failed to get container status \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": rpc error: code = NotFound desc = could not find container \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": container with ID starting with 65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.512533 4748 scope.go:117] "RemoveContainer" containerID="9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243" Feb 16 15:04:48 crc kubenswrapper[4748]: E0216 15:04:48.512898 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": container with ID starting with 9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243 not found: ID does not exist" containerID="9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.512930 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243"} err="failed to get container status \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": rpc error: code = NotFound desc = could not find container \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": container with ID starting with 9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.512949 4748 scope.go:117] 
"RemoveContainer" containerID="c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.513278 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d"} err="failed to get container status \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": rpc error: code = NotFound desc = could not find container \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": container with ID starting with c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.513304 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.513732 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"} err="failed to get container status \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": rpc error: code = NotFound desc = could not find container \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": container with ID starting with e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.513797 4748 scope.go:117] "RemoveContainer" containerID="4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.514179 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d"} err="failed to get container status \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": rpc error: code = 
NotFound desc = could not find container \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": container with ID starting with 4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.514210 4748 scope.go:117] "RemoveContainer" containerID="6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.514528 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01"} err="failed to get container status \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": rpc error: code = NotFound desc = could not find container \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": container with ID starting with 6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.514555 4748 scope.go:117] "RemoveContainer" containerID="c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.514967 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354"} err="failed to get container status \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": rpc error: code = NotFound desc = could not find container \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": container with ID starting with c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.514991 4748 scope.go:117] "RemoveContainer" containerID="fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" Feb 16 15:04:48 crc 
kubenswrapper[4748]: I0216 15:04:48.515274 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e"} err="failed to get container status \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": rpc error: code = NotFound desc = could not find container \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": container with ID starting with fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.515300 4748 scope.go:117] "RemoveContainer" containerID="0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.515654 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8"} err="failed to get container status \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": rpc error: code = NotFound desc = could not find container \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": container with ID starting with 0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.515677 4748 scope.go:117] "RemoveContainer" containerID="39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.515994 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8"} err="failed to get container status \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": rpc error: code = NotFound desc = could not find container \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": container 
with ID starting with 39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.516012 4748 scope.go:117] "RemoveContainer" containerID="65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.516305 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891"} err="failed to get container status \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": rpc error: code = NotFound desc = could not find container \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": container with ID starting with 65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.516335 4748 scope.go:117] "RemoveContainer" containerID="9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.516677 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243"} err="failed to get container status \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": rpc error: code = NotFound desc = could not find container \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": container with ID starting with 9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.516705 4748 scope.go:117] "RemoveContainer" containerID="c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.517164 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d"} err="failed to get container status \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": rpc error: code = NotFound desc = could not find container \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": container with ID starting with c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.517191 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.517499 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"} err="failed to get container status \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": rpc error: code = NotFound desc = could not find container \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": container with ID starting with e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.517526 4748 scope.go:117] "RemoveContainer" containerID="4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.517876 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d"} err="failed to get container status \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": rpc error: code = NotFound desc = could not find container \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": container with ID starting with 4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d not found: ID does not 
exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.517903 4748 scope.go:117] "RemoveContainer" containerID="6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.518140 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01"} err="failed to get container status \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": rpc error: code = NotFound desc = could not find container \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": container with ID starting with 6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.518172 4748 scope.go:117] "RemoveContainer" containerID="c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.518589 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354"} err="failed to get container status \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": rpc error: code = NotFound desc = could not find container \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": container with ID starting with c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.518610 4748 scope.go:117] "RemoveContainer" containerID="fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.518917 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e"} err="failed to get container status 
\"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": rpc error: code = NotFound desc = could not find container \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": container with ID starting with fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.518944 4748 scope.go:117] "RemoveContainer" containerID="0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519199 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8"} err="failed to get container status \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": rpc error: code = NotFound desc = could not find container \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": container with ID starting with 0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519215 4748 scope.go:117] "RemoveContainer" containerID="39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519389 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8"} err="failed to get container status \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": rpc error: code = NotFound desc = could not find container \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": container with ID starting with 39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519405 4748 scope.go:117] "RemoveContainer" 
containerID="65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519586 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891"} err="failed to get container status \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": rpc error: code = NotFound desc = could not find container \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": container with ID starting with 65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519608 4748 scope.go:117] "RemoveContainer" containerID="9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519972 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243"} err="failed to get container status \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": rpc error: code = NotFound desc = could not find container \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": container with ID starting with 9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.519998 4748 scope.go:117] "RemoveContainer" containerID="c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.520357 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d"} err="failed to get container status \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": rpc error: code = NotFound desc = could 
not find container \"c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d\": container with ID starting with c3388f85ca9b10ebe47b1159d2a4565f92d022801d7d6fa3ce5d767186357c5d not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.520460 4748 scope.go:117] "RemoveContainer" containerID="e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.520903 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e"} err="failed to get container status \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": rpc error: code = NotFound desc = could not find container \"e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e\": container with ID starting with e4f08d3d6f0fc3cb5b227786ab1e660420b7a6b7e9934b673d5667ce9f7dfd4e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.520942 4748 scope.go:117] "RemoveContainer" containerID="4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.521394 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d"} err="failed to get container status \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": rpc error: code = NotFound desc = could not find container \"4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d\": container with ID starting with 4c307b25dae2e0ff2458588ae967acc8dcba2ff4b4ba6b2918885def3d3f505d not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.521437 4748 scope.go:117] "RemoveContainer" containerID="6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 
15:04:48.521738 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01"} err="failed to get container status \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": rpc error: code = NotFound desc = could not find container \"6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01\": container with ID starting with 6f6e33f2b4aa60871ad0215212eac7d01bac3c737e21ed9fff7b52cc03ea3a01 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.521773 4748 scope.go:117] "RemoveContainer" containerID="c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.522073 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354"} err="failed to get container status \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": rpc error: code = NotFound desc = could not find container \"c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354\": container with ID starting with c3fb98379777df0a69f3faeee78489f7cacb21d2c1ee83000f42639fb8ecd354 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.522099 4748 scope.go:117] "RemoveContainer" containerID="fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.522366 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e"} err="failed to get container status \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": rpc error: code = NotFound desc = could not find container \"fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e\": container with ID starting with 
fec3209386cf03252c573fcaf35a845bfab02d40b207a3cdd465b4cb3ca52f1e not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.522394 4748 scope.go:117] "RemoveContainer" containerID="0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.523074 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8"} err="failed to get container status \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": rpc error: code = NotFound desc = could not find container \"0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8\": container with ID starting with 0aa7930f7e1eeffb5e2c8e096fb5450a507138df6e7bc0e07e7799b4fddf5cc8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.523238 4748 scope.go:117] "RemoveContainer" containerID="39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.523776 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8"} err="failed to get container status \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": rpc error: code = NotFound desc = could not find container \"39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8\": container with ID starting with 39fc670e71b2bfc553e827b9a0e36b6797a3011f3a17163b3be966e5adaa6fb8 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.523801 4748 scope.go:117] "RemoveContainer" containerID="65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.524244 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891"} err="failed to get container status \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": rpc error: code = NotFound desc = could not find container \"65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891\": container with ID starting with 65513acb8d16e98f88eefced590ce0bbbfd700d75934252c4a374de0ea38b891 not found: ID does not exist" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.524351 4748 scope.go:117] "RemoveContainer" containerID="9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243" Feb 16 15:04:48 crc kubenswrapper[4748]: I0216 15:04:48.524884 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243"} err="failed to get container status \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": rpc error: code = NotFound desc = could not find container \"9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243\": container with ID starting with 9760b1cdd1358ce41a9c45c91f755aeacd6fb342e25f36d38eca38fa359f6243 not found: ID does not exist" Feb 16 15:04:49 crc kubenswrapper[4748]: I0216 15:04:49.009508 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f88ea54-3399-4d84-bc96-5b7d9575bbf5" path="/var/lib/kubelet/pods/2f88ea54-3399-4d84-bc96-5b7d9575bbf5/volumes" Feb 16 15:04:49 crc kubenswrapper[4748]: I0216 15:04:49.268351 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"4371e21fd324cc374c7490816c37ba6c710fe61b78912738e110f6e3243c644d"} Feb 16 15:04:49 crc kubenswrapper[4748]: I0216 15:04:49.268436 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" 
event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"1edb726e7db4b23918dfeca78b80c0c94ae47034428a70e6f6f2cc820c2b24bf"} Feb 16 15:04:49 crc kubenswrapper[4748]: I0216 15:04:49.268453 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"30fa823f16163e692d3e52a55cacfdbdfab76dbc0ecd1058332fdb6726a425d3"} Feb 16 15:04:49 crc kubenswrapper[4748]: I0216 15:04:49.268467 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"ce9c339370e6deacc60cdfb1bc000ea8e5a661996beea75e870b26ada4304dd2"} Feb 16 15:04:49 crc kubenswrapper[4748]: I0216 15:04:49.268481 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"72a7834f0b35ac739fee983a25c9a45f52d476a147af303a38d7f57663f2a001"} Feb 16 15:04:49 crc kubenswrapper[4748]: I0216 15:04:49.268494 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"182d2105b81f453a82d4606ed11fa9795408a209b29c5604412ed52bf92e7625"} Feb 16 15:04:52 crc kubenswrapper[4748]: I0216 15:04:52.293871 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"bc8fc4443eaf5a8df2cdcf05dbb3d71284116601f1d058c77582e274df2ebc6a"} Feb 16 15:04:53 crc kubenswrapper[4748]: I0216 15:04:53.237489 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-9vb7j" Feb 16 15:04:54 crc kubenswrapper[4748]: I0216 
15:04:54.310586 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" event={"ID":"319cc918-26be-4962-a33c-55f3059461fc","Type":"ContainerStarted","Data":"d20336ee14dade31a2a1b53807bd6ca1d065d74e1747f330d0415bae8003b697"} Feb 16 15:04:54 crc kubenswrapper[4748]: I0216 15:04:54.311338 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:54 crc kubenswrapper[4748]: I0216 15:04:54.368215 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" podStartSLOduration=7.368196517 podStartE2EDuration="7.368196517s" podCreationTimestamp="2026-02-16 15:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:04:54.36346491 +0000 UTC m=+720.055133949" watchObservedRunningTime="2026-02-16 15:04:54.368196517 +0000 UTC m=+720.059865566" Feb 16 15:04:54 crc kubenswrapper[4748]: I0216 15:04:54.387383 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.335751 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.335792 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.392581 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.544212 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk"] Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 
15:04:55.545199 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.548805 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.565605 4748 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zndd6" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.565933 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.571666 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-rgv2z"] Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.572508 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk"] Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.572606 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.579079 4748 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-srk8r" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.596968 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rgv2z"] Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.601240 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98mzg"] Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.607462 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.621975 4748 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-jrkxd" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.626608 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98mzg"] Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.690367 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m54t\" (UniqueName: \"kubernetes.io/projected/6f321d96-89f7-4e8e-986d-0c1e236a48f3-kube-api-access-2m54t\") pod \"cert-manager-cainjector-cf98fcc89-v7pmk\" (UID: \"6f321d96-89f7-4e8e-986d-0c1e236a48f3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.690443 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xflts\" (UniqueName: \"kubernetes.io/projected/2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d-kube-api-access-xflts\") pod \"cert-manager-858654f9db-rgv2z\" (UID: \"2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d\") " pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.791451 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8zbm\" (UniqueName: \"kubernetes.io/projected/dac776e8-29fc-42fc-ad44-d4f0f16b8ef4-kube-api-access-z8zbm\") pod \"cert-manager-webhook-687f57d79b-98mzg\" (UID: \"dac776e8-29fc-42fc-ad44-d4f0f16b8ef4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.791517 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2m54t\" (UniqueName: 
\"kubernetes.io/projected/6f321d96-89f7-4e8e-986d-0c1e236a48f3-kube-api-access-2m54t\") pod \"cert-manager-cainjector-cf98fcc89-v7pmk\" (UID: \"6f321d96-89f7-4e8e-986d-0c1e236a48f3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.791561 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xflts\" (UniqueName: \"kubernetes.io/projected/2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d-kube-api-access-xflts\") pod \"cert-manager-858654f9db-rgv2z\" (UID: \"2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d\") " pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.822966 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xflts\" (UniqueName: \"kubernetes.io/projected/2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d-kube-api-access-xflts\") pod \"cert-manager-858654f9db-rgv2z\" (UID: \"2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d\") " pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.824330 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2m54t\" (UniqueName: \"kubernetes.io/projected/6f321d96-89f7-4e8e-986d-0c1e236a48f3-kube-api-access-2m54t\") pod \"cert-manager-cainjector-cf98fcc89-v7pmk\" (UID: \"6f321d96-89f7-4e8e-986d-0c1e236a48f3\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.865725 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.892913 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8zbm\" (UniqueName: \"kubernetes.io/projected/dac776e8-29fc-42fc-ad44-d4f0f16b8ef4-kube-api-access-z8zbm\") pod \"cert-manager-webhook-687f57d79b-98mzg\" (UID: \"dac776e8-29fc-42fc-ad44-d4f0f16b8ef4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.894134 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(e1e81cdb06e2a780bd76da91ed20a900684f0a241db60692a9351bb556eb0ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.894273 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(e1e81cdb06e2a780bd76da91ed20a900684f0a241db60692a9351bb556eb0ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.894338 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(e1e81cdb06e2a780bd76da91ed20a900684f0a241db60692a9351bb556eb0ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.894429 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager(6f321d96-89f7-4e8e-986d-0c1e236a48f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager(6f321d96-89f7-4e8e-986d-0c1e236a48f3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(e1e81cdb06e2a780bd76da91ed20a900684f0a241db60692a9351bb556eb0ebb): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" podUID="6f321d96-89f7-4e8e-986d-0c1e236a48f3" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.897661 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.914309 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8zbm\" (UniqueName: \"kubernetes.io/projected/dac776e8-29fc-42fc-ad44-d4f0f16b8ef4-kube-api-access-z8zbm\") pod \"cert-manager-webhook-687f57d79b-98mzg\" (UID: \"dac776e8-29fc-42fc-ad44-d4f0f16b8ef4\") " pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.924114 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(2876a5c3c1255fb26b56fd09de7d9bf586a215657ba745d875289697f97bd770): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.924181 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(2876a5c3c1255fb26b56fd09de7d9bf586a215657ba745d875289697f97bd770): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.924203 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(2876a5c3c1255fb26b56fd09de7d9bf586a215657ba745d875289697f97bd770): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.924249 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-rgv2z_cert-manager(2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-rgv2z_cert-manager(2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(2876a5c3c1255fb26b56fd09de7d9bf586a215657ba745d875289697f97bd770): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-rgv2z" podUID="2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d" Feb 16 15:04:55 crc kubenswrapper[4748]: I0216 15:04:55.932041 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.961822 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(0a422ff6fa1b6f41f98594e189d1583a7623e8abe2d1dd01da32636f603fab69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.961883 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(0a422ff6fa1b6f41f98594e189d1583a7623e8abe2d1dd01da32636f603fab69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.961903 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(0a422ff6fa1b6f41f98594e189d1583a7623e8abe2d1dd01da32636f603fab69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:55 crc kubenswrapper[4748]: E0216 15:04:55.961946 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-98mzg_cert-manager(dac776e8-29fc-42fc-ad44-d4f0f16b8ef4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-98mzg_cert-manager(dac776e8-29fc-42fc-ad44-d4f0f16b8ef4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(0a422ff6fa1b6f41f98594e189d1583a7623e8abe2d1dd01da32636f603fab69): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" podUID="dac776e8-29fc-42fc-ad44-d4f0f16b8ef4" Feb 16 15:04:56 crc kubenswrapper[4748]: I0216 15:04:56.340503 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:56 crc kubenswrapper[4748]: I0216 15:04:56.340534 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:56 crc kubenswrapper[4748]: I0216 15:04:56.340633 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:56 crc kubenswrapper[4748]: I0216 15:04:56.341002 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:56 crc kubenswrapper[4748]: I0216 15:04:56.341885 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:56 crc kubenswrapper[4748]: I0216 15:04:56.373410 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.415605 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(8f4c7a7091120c0db311e3bdaa898dba9dd1cbeb2016b4105ac1cfbd18d1b0e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.416175 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(8f4c7a7091120c0db311e3bdaa898dba9dd1cbeb2016b4105ac1cfbd18d1b0e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.416205 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(8f4c7a7091120c0db311e3bdaa898dba9dd1cbeb2016b4105ac1cfbd18d1b0e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.416266 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-98mzg_cert-manager(dac776e8-29fc-42fc-ad44-d4f0f16b8ef4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-98mzg_cert-manager(dac776e8-29fc-42fc-ad44-d4f0f16b8ef4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(8f4c7a7091120c0db311e3bdaa898dba9dd1cbeb2016b4105ac1cfbd18d1b0e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" podUID="dac776e8-29fc-42fc-ad44-d4f0f16b8ef4" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.444125 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(c6eb2a5e352f2e71082cbb6eee32edfde465c2774c579fd20443dce4573057e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.444253 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(c6eb2a5e352f2e71082cbb6eee32edfde465c2774c579fd20443dce4573057e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.444317 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(c6eb2a5e352f2e71082cbb6eee32edfde465c2774c579fd20443dce4573057e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.444434 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-rgv2z_cert-manager(2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-rgv2z_cert-manager(2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(c6eb2a5e352f2e71082cbb6eee32edfde465c2774c579fd20443dce4573057e2): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-rgv2z" podUID="2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.468506 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(8edae42f5dc0bbc441e20713e09eee52a58866de04f545fdd0183f782fa53813): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.468641 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(8edae42f5dc0bbc441e20713e09eee52a58866de04f545fdd0183f782fa53813): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.468704 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(8edae42f5dc0bbc441e20713e09eee52a58866de04f545fdd0183f782fa53813): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:04:56 crc kubenswrapper[4748]: E0216 15:04:56.468821 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager(6f321d96-89f7-4e8e-986d-0c1e236a48f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager(6f321d96-89f7-4e8e-986d-0c1e236a48f3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(8edae42f5dc0bbc441e20713e09eee52a58866de04f545fdd0183f782fa53813): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" podUID="6f321d96-89f7-4e8e-986d-0c1e236a48f3" Feb 16 15:04:59 crc kubenswrapper[4748]: I0216 15:04:59.993939 4748 scope.go:117] "RemoveContainer" containerID="d9bd7e0c3f7c247fb4b65a2732e952b3dbcbebd8f74cfd56d321817b97bdd8bb" Feb 16 15:04:59 crc kubenswrapper[4748]: E0216 15:04:59.994663 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-dw679_openshift-multus(1724aef8-25e0-40aa-86be-2ca7849960f1)\"" pod="openshift-multus/multus-dw679" podUID="1724aef8-25e0-40aa-86be-2ca7849960f1" Feb 16 15:05:07 crc kubenswrapper[4748]: I0216 15:05:07.994132 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:05:07 crc kubenswrapper[4748]: I0216 15:05:07.995240 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:05:08 crc kubenswrapper[4748]: E0216 15:05:08.045107 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(a164e7f137d1d1faf31568c5028b4778886e0dc7df536cc9d2d4b78ac73df86e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:05:08 crc kubenswrapper[4748]: E0216 15:05:08.045237 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(a164e7f137d1d1faf31568c5028b4778886e0dc7df536cc9d2d4b78ac73df86e): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:05:08 crc kubenswrapper[4748]: E0216 15:05:08.045278 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(a164e7f137d1d1faf31568c5028b4778886e0dc7df536cc9d2d4b78ac73df86e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:05:08 crc kubenswrapper[4748]: E0216 15:05:08.045355 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-858654f9db-rgv2z_cert-manager(2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-858654f9db-rgv2z_cert-manager(2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-858654f9db-rgv2z_cert-manager_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d_0(a164e7f137d1d1faf31568c5028b4778886e0dc7df536cc9d2d4b78ac73df86e): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-858654f9db-rgv2z" podUID="2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d" Feb 16 15:05:09 crc kubenswrapper[4748]: I0216 15:05:09.994257 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:09 crc kubenswrapper[4748]: I0216 15:05:09.995194 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:10 crc kubenswrapper[4748]: E0216 15:05:10.019473 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(d359e55ee8b0979a737098e84d3a243511bbde16d00d8c56d241ed7e7f359ff5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 15:05:10 crc kubenswrapper[4748]: E0216 15:05:10.019556 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(d359e55ee8b0979a737098e84d3a243511bbde16d00d8c56d241ed7e7f359ff5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:10 crc kubenswrapper[4748]: E0216 15:05:10.019593 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(d359e55ee8b0979a737098e84d3a243511bbde16d00d8c56d241ed7e7f359ff5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:10 crc kubenswrapper[4748]: E0216 15:05:10.019657 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-webhook-687f57d79b-98mzg_cert-manager(dac776e8-29fc-42fc-ad44-d4f0f16b8ef4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-webhook-687f57d79b-98mzg_cert-manager(dac776e8-29fc-42fc-ad44-d4f0f16b8ef4)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-webhook-687f57d79b-98mzg_cert-manager_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4_0(d359e55ee8b0979a737098e84d3a243511bbde16d00d8c56d241ed7e7f359ff5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" podUID="dac776e8-29fc-42fc-ad44-d4f0f16b8ef4" Feb 16 15:05:10 crc kubenswrapper[4748]: I0216 15:05:10.993981 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:05:10 crc kubenswrapper[4748]: I0216 15:05:10.995426 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:05:11 crc kubenswrapper[4748]: E0216 15:05:11.036888 4748 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(47f9eb12da1f569e488a11bdb1f3920f4a2a1d011db2320ccc5e90b51bd36032): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 16 15:05:11 crc kubenswrapper[4748]: E0216 15:05:11.037033 4748 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(47f9eb12da1f569e488a11bdb1f3920f4a2a1d011db2320ccc5e90b51bd36032): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:05:11 crc kubenswrapper[4748]: E0216 15:05:11.037070 4748 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(47f9eb12da1f569e488a11bdb1f3920f4a2a1d011db2320ccc5e90b51bd36032): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:05:11 crc kubenswrapper[4748]: E0216 15:05:11.037181 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager(6f321d96-89f7-4e8e-986d-0c1e236a48f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager(6f321d96-89f7-4e8e-986d-0c1e236a48f3)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cert-manager-cainjector-cf98fcc89-v7pmk_cert-manager_6f321d96-89f7-4e8e-986d-0c1e236a48f3_0(47f9eb12da1f569e488a11bdb1f3920f4a2a1d011db2320ccc5e90b51bd36032): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" podUID="6f321d96-89f7-4e8e-986d-0c1e236a48f3" Feb 16 15:05:13 crc kubenswrapper[4748]: I0216 15:05:13.995339 4748 scope.go:117] "RemoveContainer" containerID="d9bd7e0c3f7c247fb4b65a2732e952b3dbcbebd8f74cfd56d321817b97bdd8bb" Feb 16 15:05:14 crc kubenswrapper[4748]: I0216 15:05:14.478472 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-dw679_1724aef8-25e0-40aa-86be-2ca7849960f1/kube-multus/2.log" Feb 16 15:05:14 crc kubenswrapper[4748]: I0216 15:05:14.478988 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-dw679" event={"ID":"1724aef8-25e0-40aa-86be-2ca7849960f1","Type":"ContainerStarted","Data":"7fe0751c7b2b88626ea484d25736f1b28d10877581691248fe253585d928b579"} Feb 16 15:05:17 crc kubenswrapper[4748]: I0216 15:05:17.820389 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bhbtk" Feb 16 15:05:20 crc kubenswrapper[4748]: I0216 15:05:20.994096 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:05:20 crc kubenswrapper[4748]: I0216 15:05:20.995289 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-rgv2z" Feb 16 15:05:21 crc kubenswrapper[4748]: I0216 15:05:21.211924 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-rgv2z"] Feb 16 15:05:21 crc kubenswrapper[4748]: I0216 15:05:21.535678 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rgv2z" event={"ID":"2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d","Type":"ContainerStarted","Data":"3b1a7854f52284e06bb1a4002b2f9a30c6ddf26430f086812b19ca4115ec08d7"} Feb 16 15:05:21 crc kubenswrapper[4748]: I0216 15:05:21.993699 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:21 crc kubenswrapper[4748]: I0216 15:05:21.995128 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:22 crc kubenswrapper[4748]: I0216 15:05:22.548649 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-98mzg"] Feb 16 15:05:22 crc kubenswrapper[4748]: W0216 15:05:22.637974 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddac776e8_29fc_42fc_ad44_d4f0f16b8ef4.slice/crio-0628f61fbfeb83ab68aaea59f4623d89bffaa81564973478c8bd4eb274346c13 WatchSource:0}: Error finding container 0628f61fbfeb83ab68aaea59f4623d89bffaa81564973478c8bd4eb274346c13: Status 404 returned error can't find the container with id 0628f61fbfeb83ab68aaea59f4623d89bffaa81564973478c8bd4eb274346c13 Feb 16 15:05:23 crc kubenswrapper[4748]: I0216 15:05:23.557096 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" event={"ID":"dac776e8-29fc-42fc-ad44-d4f0f16b8ef4","Type":"ContainerStarted","Data":"0628f61fbfeb83ab68aaea59f4623d89bffaa81564973478c8bd4eb274346c13"} 
Feb 16 15:05:24 crc kubenswrapper[4748]: I0216 15:05:24.567367 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-rgv2z" event={"ID":"2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d","Type":"ContainerStarted","Data":"f1df3db4a15e84496ab995c9808c23e6e27078cbd3a903157471584d378734d7"} Feb 16 15:05:24 crc kubenswrapper[4748]: I0216 15:05:24.591755 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-rgv2z" podStartSLOduration=27.194152643 podStartE2EDuration="29.591728932s" podCreationTimestamp="2026-02-16 15:04:55 +0000 UTC" firstStartedPulling="2026-02-16 15:05:21.220159906 +0000 UTC m=+746.911828945" lastFinishedPulling="2026-02-16 15:05:23.617736155 +0000 UTC m=+749.309405234" observedRunningTime="2026-02-16 15:05:24.581200553 +0000 UTC m=+750.272869592" watchObservedRunningTime="2026-02-16 15:05:24.591728932 +0000 UTC m=+750.283397981" Feb 16 15:05:25 crc kubenswrapper[4748]: I0216 15:05:25.581224 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" event={"ID":"dac776e8-29fc-42fc-ad44-d4f0f16b8ef4","Type":"ContainerStarted","Data":"d47635d380af100a01f11109b0d817d2687d714a4df64269f6c47bab403a99c1"} Feb 16 15:05:25 crc kubenswrapper[4748]: I0216 15:05:25.609437 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" podStartSLOduration=28.504469248 podStartE2EDuration="30.609410994s" podCreationTimestamp="2026-02-16 15:04:55 +0000 UTC" firstStartedPulling="2026-02-16 15:05:22.640297695 +0000 UTC m=+748.331966734" lastFinishedPulling="2026-02-16 15:05:24.745239431 +0000 UTC m=+750.436908480" observedRunningTime="2026-02-16 15:05:25.606360129 +0000 UTC m=+751.298029208" watchObservedRunningTime="2026-02-16 15:05:25.609410994 +0000 UTC m=+751.301080073" Feb 16 15:05:25 crc kubenswrapper[4748]: I0216 15:05:25.932762 4748 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:25 crc kubenswrapper[4748]: I0216 15:05:25.994311 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:05:25 crc kubenswrapper[4748]: I0216 15:05:25.994850 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" Feb 16 15:05:26 crc kubenswrapper[4748]: I0216 15:05:26.230611 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk"] Feb 16 15:05:26 crc kubenswrapper[4748]: I0216 15:05:26.594115 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" event={"ID":"6f321d96-89f7-4e8e-986d-0c1e236a48f3","Type":"ContainerStarted","Data":"1445287fc4899fb1752f57a8bfaaf28564c1310784d9619045662a81c7776d97"} Feb 16 15:05:28 crc kubenswrapper[4748]: I0216 15:05:28.612619 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" event={"ID":"6f321d96-89f7-4e8e-986d-0c1e236a48f3","Type":"ContainerStarted","Data":"cc84d2fff344dd443f289f168295d372e12a6db1bae73df7823c595fd6b2f9a0"} Feb 16 15:05:28 crc kubenswrapper[4748]: I0216 15:05:28.631699 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-v7pmk" podStartSLOduration=32.348968626 podStartE2EDuration="33.631668341s" podCreationTimestamp="2026-02-16 15:04:55 +0000 UTC" firstStartedPulling="2026-02-16 15:05:26.237642029 +0000 UTC m=+751.929311068" lastFinishedPulling="2026-02-16 15:05:27.520341744 +0000 UTC m=+753.212010783" observedRunningTime="2026-02-16 15:05:28.629197211 +0000 UTC m=+754.320866300" watchObservedRunningTime="2026-02-16 15:05:28.631668341 +0000 UTC m=+754.323337400" Feb 16 15:05:30 crc 
kubenswrapper[4748]: I0216 15:05:30.936126 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-98mzg" Feb 16 15:05:31 crc kubenswrapper[4748]: I0216 15:05:31.729109 4748 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.123617 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mprpv"] Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.125407 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.140092 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mprpv"] Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.304287 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-utilities\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.304404 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txf8g\" (UniqueName: \"kubernetes.io/projected/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-kube-api-access-txf8g\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.304805 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-catalog-content\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.406336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txf8g\" (UniqueName: \"kubernetes.io/projected/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-kube-api-access-txf8g\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.406474 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-catalog-content\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.406573 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-utilities\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.407362 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-catalog-content\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.407634 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-utilities\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.432137 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txf8g\" (UniqueName: \"kubernetes.io/projected/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-kube-api-access-txf8g\") pod \"certified-operators-mprpv\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.456695 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.738816 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mprpv"] Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.752007 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:05:34 crc kubenswrapper[4748]: I0216 15:05:34.752095 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:05:35 crc kubenswrapper[4748]: I0216 15:05:35.693842 4748 generic.go:334] "Generic (PLEG): container finished" podID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerID="f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39" exitCode=0 Feb 16 15:05:35 
crc kubenswrapper[4748]: I0216 15:05:35.693912 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mprpv" event={"ID":"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a","Type":"ContainerDied","Data":"f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39"} Feb 16 15:05:35 crc kubenswrapper[4748]: I0216 15:05:35.693956 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mprpv" event={"ID":"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a","Type":"ContainerStarted","Data":"ae2c25e33543cb30d4889dfd89625dc9e35ff024b08de86f48c62f59c530c35e"} Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.495035 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52q2j"] Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.496829 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.519577 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52q2j"] Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.542289 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdzw6\" (UniqueName: \"kubernetes.io/projected/c1677d9d-22da-4a41-8720-cfc303fb9da1-kube-api-access-tdzw6\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.542468 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-catalog-content\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " 
pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.542575 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-utilities\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.643492 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-catalog-content\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.643576 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-utilities\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.643639 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdzw6\" (UniqueName: \"kubernetes.io/projected/c1677d9d-22da-4a41-8720-cfc303fb9da1-kube-api-access-tdzw6\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.644450 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-catalog-content\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " 
pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.644612 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-utilities\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.671679 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdzw6\" (UniqueName: \"kubernetes.io/projected/c1677d9d-22da-4a41-8720-cfc303fb9da1-kube-api-access-tdzw6\") pod \"community-operators-52q2j\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") " pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:36 crc kubenswrapper[4748]: I0216 15:05:36.822704 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:37 crc kubenswrapper[4748]: I0216 15:05:37.304628 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52q2j"] Feb 16 15:05:37 crc kubenswrapper[4748]: I0216 15:05:37.719709 4748 generic.go:334] "Generic (PLEG): container finished" podID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerID="7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2" exitCode=0 Feb 16 15:05:37 crc kubenswrapper[4748]: I0216 15:05:37.719841 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mprpv" event={"ID":"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a","Type":"ContainerDied","Data":"7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2"} Feb 16 15:05:37 crc kubenswrapper[4748]: I0216 15:05:37.724494 4748 generic.go:334] "Generic (PLEG): container finished" podID="c1677d9d-22da-4a41-8720-cfc303fb9da1" 
containerID="e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb" exitCode=0 Feb 16 15:05:37 crc kubenswrapper[4748]: I0216 15:05:37.724542 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52q2j" event={"ID":"c1677d9d-22da-4a41-8720-cfc303fb9da1","Type":"ContainerDied","Data":"e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb"} Feb 16 15:05:37 crc kubenswrapper[4748]: I0216 15:05:37.724572 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52q2j" event={"ID":"c1677d9d-22da-4a41-8720-cfc303fb9da1","Type":"ContainerStarted","Data":"f82a5a50174b527343cd9364ee26a487a8420f2d7e41659985ca77e6ede83543"} Feb 16 15:05:38 crc kubenswrapper[4748]: I0216 15:05:38.734436 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mprpv" event={"ID":"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a","Type":"ContainerStarted","Data":"db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d"} Feb 16 15:05:38 crc kubenswrapper[4748]: I0216 15:05:38.737881 4748 generic.go:334] "Generic (PLEG): container finished" podID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerID="255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9" exitCode=0 Feb 16 15:05:38 crc kubenswrapper[4748]: I0216 15:05:38.737922 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52q2j" event={"ID":"c1677d9d-22da-4a41-8720-cfc303fb9da1","Type":"ContainerDied","Data":"255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9"} Feb 16 15:05:38 crc kubenswrapper[4748]: I0216 15:05:38.755352 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mprpv" podStartSLOduration=2.323308593 podStartE2EDuration="4.755324021s" podCreationTimestamp="2026-02-16 15:05:34 +0000 UTC" firstStartedPulling="2026-02-16 15:05:35.698094913 
+0000 UTC m=+761.389763942" lastFinishedPulling="2026-02-16 15:05:38.130110291 +0000 UTC m=+763.821779370" observedRunningTime="2026-02-16 15:05:38.753141788 +0000 UTC m=+764.444810837" watchObservedRunningTime="2026-02-16 15:05:38.755324021 +0000 UTC m=+764.446993070" Feb 16 15:05:39 crc kubenswrapper[4748]: I0216 15:05:39.748521 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52q2j" event={"ID":"c1677d9d-22da-4a41-8720-cfc303fb9da1","Type":"ContainerStarted","Data":"3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de"} Feb 16 15:05:39 crc kubenswrapper[4748]: I0216 15:05:39.774965 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52q2j" podStartSLOduration=2.37788025 podStartE2EDuration="3.774933221s" podCreationTimestamp="2026-02-16 15:05:36 +0000 UTC" firstStartedPulling="2026-02-16 15:05:37.72710904 +0000 UTC m=+763.418778109" lastFinishedPulling="2026-02-16 15:05:39.124162051 +0000 UTC m=+764.815831080" observedRunningTime="2026-02-16 15:05:39.770749298 +0000 UTC m=+765.462418337" watchObservedRunningTime="2026-02-16 15:05:39.774933221 +0000 UTC m=+765.466602300" Feb 16 15:05:44 crc kubenswrapper[4748]: I0216 15:05:44.457605 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:44 crc kubenswrapper[4748]: I0216 15:05:44.458128 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:44 crc kubenswrapper[4748]: I0216 15:05:44.515741 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:44 crc kubenswrapper[4748]: I0216 15:05:44.853068 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:44 
crc kubenswrapper[4748]: I0216 15:05:44.928785 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mprpv"] Feb 16 15:05:46 crc kubenswrapper[4748]: I0216 15:05:46.809634 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mprpv" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="registry-server" containerID="cri-o://db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d" gracePeriod=2 Feb 16 15:05:46 crc kubenswrapper[4748]: I0216 15:05:46.823297 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:46 crc kubenswrapper[4748]: I0216 15:05:46.824349 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:46 crc kubenswrapper[4748]: I0216 15:05:46.879072 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52q2j" Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.250153 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mprpv" Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.433368 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txf8g\" (UniqueName: \"kubernetes.io/projected/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-kube-api-access-txf8g\") pod \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.433509 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-catalog-content\") pod \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.433658 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-utilities\") pod \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\" (UID: \"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a\") " Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.436108 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-utilities" (OuterVolumeSpecName: "utilities") pod "ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" (UID: "ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.439798 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-kube-api-access-txf8g" (OuterVolumeSpecName: "kube-api-access-txf8g") pod "ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" (UID: "ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a"). InnerVolumeSpecName "kube-api-access-txf8g". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.535767 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.535836 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txf8g\" (UniqueName: \"kubernetes.io/projected/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-kube-api-access-txf8g\") on node \"crc\" DevicePath \"\""
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.544648 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" (UID: "ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.638486 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.822867 4748 generic.go:334] "Generic (PLEG): container finished" podID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerID="db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d" exitCode=0
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.822998 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mprpv" event={"ID":"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a","Type":"ContainerDied","Data":"db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d"}
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.823108 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mprpv" event={"ID":"ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a","Type":"ContainerDied","Data":"ae2c25e33543cb30d4889dfd89625dc9e35ff024b08de86f48c62f59c530c35e"}
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.823156 4748 scope.go:117] "RemoveContainer" containerID="db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.823035 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mprpv"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.858382 4748 scope.go:117] "RemoveContainer" containerID="7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.887957 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mprpv"]
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.898196 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52q2j"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.899673 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mprpv"]
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.902157 4748 scope.go:117] "RemoveContainer" containerID="f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.932428 4748 scope.go:117] "RemoveContainer" containerID="db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d"
Feb 16 15:05:47 crc kubenswrapper[4748]: E0216 15:05:47.932953 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d\": container with ID starting with db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d not found: ID does not exist" containerID="db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.932995 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d"} err="failed to get container status \"db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d\": rpc error: code = NotFound desc = could not find container \"db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d\": container with ID starting with db91a3158aeb0b8dea625dbc36eec81990627b2e338ffc3772ad8f70a50fc62d not found: ID does not exist"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.933025 4748 scope.go:117] "RemoveContainer" containerID="7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2"
Feb 16 15:05:47 crc kubenswrapper[4748]: E0216 15:05:47.933359 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2\": container with ID starting with 7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2 not found: ID does not exist" containerID="7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.933390 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2"} err="failed to get container status \"7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2\": rpc error: code = NotFound desc = could not find container \"7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2\": container with ID starting with 7f87c53242a3bb0ed3b2dcf6a6734db6ae43d72b49bc5f2c130e3d1020d912d2 not found: ID does not exist"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.933415 4748 scope.go:117] "RemoveContainer" containerID="f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39"
Feb 16 15:05:47 crc kubenswrapper[4748]: E0216 15:05:47.933800 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39\": container with ID starting with f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39 not found: ID does not exist" containerID="f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39"
Feb 16 15:05:47 crc kubenswrapper[4748]: I0216 15:05:47.933826 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39"} err="failed to get container status \"f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39\": rpc error: code = NotFound desc = could not find container \"f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39\": container with ID starting with f9590fe397993e9c9a446e686c938b06240e0de7a6eb8da0546d6e1cf11abf39 not found: ID does not exist"
Feb 16 15:05:49 crc kubenswrapper[4748]: I0216 15:05:49.006021 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" path="/var/lib/kubelet/pods/ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a/volumes"
Feb 16 15:05:50 crc kubenswrapper[4748]: I0216 15:05:50.185789 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52q2j"]
Feb 16 15:05:50 crc kubenswrapper[4748]: I0216 15:05:50.849817 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52q2j" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="registry-server" containerID="cri-o://3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de" gracePeriod=2
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.300450 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52q2j"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.306412 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdzw6\" (UniqueName: \"kubernetes.io/projected/c1677d9d-22da-4a41-8720-cfc303fb9da1-kube-api-access-tdzw6\") pod \"c1677d9d-22da-4a41-8720-cfc303fb9da1\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") "
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.306579 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-catalog-content\") pod \"c1677d9d-22da-4a41-8720-cfc303fb9da1\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") "
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.306652 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-utilities\") pod \"c1677d9d-22da-4a41-8720-cfc303fb9da1\" (UID: \"c1677d9d-22da-4a41-8720-cfc303fb9da1\") "
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.311928 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-utilities" (OuterVolumeSpecName: "utilities") pod "c1677d9d-22da-4a41-8720-cfc303fb9da1" (UID: "c1677d9d-22da-4a41-8720-cfc303fb9da1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.317970 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1677d9d-22da-4a41-8720-cfc303fb9da1-kube-api-access-tdzw6" (OuterVolumeSpecName: "kube-api-access-tdzw6") pod "c1677d9d-22da-4a41-8720-cfc303fb9da1" (UID: "c1677d9d-22da-4a41-8720-cfc303fb9da1"). InnerVolumeSpecName "kube-api-access-tdzw6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.370247 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1677d9d-22da-4a41-8720-cfc303fb9da1" (UID: "c1677d9d-22da-4a41-8720-cfc303fb9da1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.408359 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdzw6\" (UniqueName: \"kubernetes.io/projected/c1677d9d-22da-4a41-8720-cfc303fb9da1-kube-api-access-tdzw6\") on node \"crc\" DevicePath \"\""
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.408397 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.408408 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1677d9d-22da-4a41-8720-cfc303fb9da1-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.858030 4748 generic.go:334] "Generic (PLEG): container finished" podID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerID="3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de" exitCode=0
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.858091 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52q2j" event={"ID":"c1677d9d-22da-4a41-8720-cfc303fb9da1","Type":"ContainerDied","Data":"3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de"}
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.858153 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52q2j"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.859161 4748 scope.go:117] "RemoveContainer" containerID="3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.859136 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52q2j" event={"ID":"c1677d9d-22da-4a41-8720-cfc303fb9da1","Type":"ContainerDied","Data":"f82a5a50174b527343cd9364ee26a487a8420f2d7e41659985ca77e6ede83543"}
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.880008 4748 scope.go:117] "RemoveContainer" containerID="255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.904470 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52q2j"]
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.908051 4748 scope.go:117] "RemoveContainer" containerID="e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.909756 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52q2j"]
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.939216 4748 scope.go:117] "RemoveContainer" containerID="3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de"
Feb 16 15:05:51 crc kubenswrapper[4748]: E0216 15:05:51.939860 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de\": container with ID starting with 3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de not found: ID does not exist" containerID="3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.939904 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de"} err="failed to get container status \"3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de\": rpc error: code = NotFound desc = could not find container \"3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de\": container with ID starting with 3c13b5554bb7bed56a0c53c8c86bd493bf8174033e07a7b5cb9b8548c53ef3de not found: ID does not exist"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.939940 4748 scope.go:117] "RemoveContainer" containerID="255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9"
Feb 16 15:05:51 crc kubenswrapper[4748]: E0216 15:05:51.940456 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9\": container with ID starting with 255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9 not found: ID does not exist" containerID="255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.940603 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9"} err="failed to get container status \"255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9\": rpc error: code = NotFound desc = could not find container \"255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9\": container with ID starting with 255fb640f8856b05721032a789fcaa6d1d874ceaffd9d3ca7a303f9ce49c9bb9 not found: ID does not exist"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.940753 4748 scope.go:117] "RemoveContainer" containerID="e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb"
Feb 16 15:05:51 crc kubenswrapper[4748]: E0216 15:05:51.941277 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb\": container with ID starting with e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb not found: ID does not exist" containerID="e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb"
Feb 16 15:05:51 crc kubenswrapper[4748]: I0216 15:05:51.941323 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb"} err="failed to get container status \"e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb\": rpc error: code = NotFound desc = could not find container \"e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb\": container with ID starting with e3a1151a2acf469db42934eebb2883d4e49919d21984ebff72a55d16ee4331cb not found: ID does not exist"
Feb 16 15:05:53 crc kubenswrapper[4748]: I0216 15:05:53.001934 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" path="/var/lib/kubelet/pods/c1677d9d-22da-4a41-8720-cfc303fb9da1/volumes"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.366437 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"]
Feb 16 15:05:59 crc kubenswrapper[4748]: E0216 15:05:59.367186 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="extract-utilities"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367203 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="extract-utilities"
Feb 16 15:05:59 crc kubenswrapper[4748]: E0216 15:05:59.367217 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="extract-utilities"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367223 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="extract-utilities"
Feb 16 15:05:59 crc kubenswrapper[4748]: E0216 15:05:59.367242 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="extract-content"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367248 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="extract-content"
Feb 16 15:05:59 crc kubenswrapper[4748]: E0216 15:05:59.367258 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="registry-server"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367264 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="registry-server"
Feb 16 15:05:59 crc kubenswrapper[4748]: E0216 15:05:59.367273 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="registry-server"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367281 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="registry-server"
Feb 16 15:05:59 crc kubenswrapper[4748]: E0216 15:05:59.367290 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="extract-content"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367299 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="extract-content"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367405 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1677d9d-22da-4a41-8720-cfc303fb9da1" containerName="registry-server"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.367419 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef77b3e2-77c7-4b7c-a754-ad1d52b3e94a" containerName="registry-server"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.368381 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.370843 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.376425 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"]
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.540050 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvjvh\" (UniqueName: \"kubernetes.io/projected/bb0800a2-3982-4128-adfe-ac7e8700e11d-kube-api-access-fvjvh\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.540618 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.540765 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.642394 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.642485 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvjvh\" (UniqueName: \"kubernetes.io/projected/bb0800a2-3982-4128-adfe-ac7e8700e11d-kube-api-access-fvjvh\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.642519 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.643212 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.643460 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.679818 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvjvh\" (UniqueName: \"kubernetes.io/projected/bb0800a2-3982-4128-adfe-ac7e8700e11d-kube-api-access-fvjvh\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:05:59 crc kubenswrapper[4748]: I0216 15:05:59.687114 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"
Feb 16 15:06:00 crc kubenswrapper[4748]: I0216 15:06:00.169884 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw"]
Feb 16 15:06:00 crc kubenswrapper[4748]: I0216 15:06:00.943196 4748 generic.go:334] "Generic (PLEG): container finished" podID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerID="72dafd35ef1a8d14b8727884fa4d78cf1c57210423b1d1c281ad87973adb6fa6" exitCode=0
Feb 16 15:06:00 crc kubenswrapper[4748]: I0216 15:06:00.943370 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw" event={"ID":"bb0800a2-3982-4128-adfe-ac7e8700e11d","Type":"ContainerDied","Data":"72dafd35ef1a8d14b8727884fa4d78cf1c57210423b1d1c281ad87973adb6fa6"}
Feb 16 15:06:00 crc kubenswrapper[4748]: I0216 15:06:00.943777 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw" event={"ID":"bb0800a2-3982-4128-adfe-ac7e8700e11d","Type":"ContainerStarted","Data":"27a5c70e0bc35f67fc21b17ffe3fae40c6cb736490ba7fe3db595b9217970913"}
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.673469 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s9sp4"]
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.675440 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.689410 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9sp4"]
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.797242 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"]
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.797961 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.800215 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.801062 4748 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-zqt5f"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.808152 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.811996 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.879663 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-utilities\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.879782 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-catalog-content\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.879853 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7h75\" (UniqueName: \"kubernetes.io/projected/86abd3cb-3262-40ce-8eec-933091ea8de8-kube-api-access-b7h75\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.981448 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7h75\" (UniqueName: \"kubernetes.io/projected/86abd3cb-3262-40ce-8eec-933091ea8de8-kube-api-access-b7h75\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.981516 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmmf\" (UniqueName: \"kubernetes.io/projected/2575357f-5b6f-4fdd-9be0-ff91ca86641a-kube-api-access-7hmmf\") pod \"minio\" (UID: \"2575357f-5b6f-4fdd-9be0-ff91ca86641a\") " pod="minio-dev/minio"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.981539 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\") pod \"minio\" (UID: \"2575357f-5b6f-4fdd-9be0-ff91ca86641a\") " pod="minio-dev/minio"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.981577 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-utilities\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.981600 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-catalog-content\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.982143 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-catalog-content\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:01 crc kubenswrapper[4748]: I0216 15:06:01.982500 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-utilities\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.016730 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7h75\" (UniqueName: \"kubernetes.io/projected/86abd3cb-3262-40ce-8eec-933091ea8de8-kube-api-access-b7h75\") pod \"redhat-operators-s9sp4\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.083466 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmmf\" (UniqueName: \"kubernetes.io/projected/2575357f-5b6f-4fdd-9be0-ff91ca86641a-kube-api-access-7hmmf\") pod \"minio\" (UID: \"2575357f-5b6f-4fdd-9be0-ff91ca86641a\") " pod="minio-dev/minio"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.083530 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\") pod \"minio\" (UID: \"2575357f-5b6f-4fdd-9be0-ff91ca86641a\") " pod="minio-dev/minio"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.089297 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.089350 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\") pod \"minio\" (UID: \"2575357f-5b6f-4fdd-9be0-ff91ca86641a\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0f9166a01e1e8814daea183b28675048bcf325f9f912395957ec444f1fcf1654/globalmount\"" pod="minio-dev/minio"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.106423 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmmf\" (UniqueName: \"kubernetes.io/projected/2575357f-5b6f-4fdd-9be0-ff91ca86641a-kube-api-access-7hmmf\") pod \"minio\" (UID: \"2575357f-5b6f-4fdd-9be0-ff91ca86641a\") " pod="minio-dev/minio"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.135788 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-26be04bc-3eed-43cc-b7f6-97edafdb0fef\") pod \"minio\" (UID: \"2575357f-5b6f-4fdd-9be0-ff91ca86641a\") " pod="minio-dev/minio"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.306234 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sp4"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.435563 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.665030 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9sp4"]
Feb 16 15:06:02 crc kubenswrapper[4748]: W0216 15:06:02.673421 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86abd3cb_3262_40ce_8eec_933091ea8de8.slice/crio-31804db7310c0d2f766ca54be4f2e693fbd7e5f0cbd3559d2d8aad7b51b47cac WatchSource:0}: Error finding container 31804db7310c0d2f766ca54be4f2e693fbd7e5f0cbd3559d2d8aad7b51b47cac: Status 404 returned error can't find the container with id 31804db7310c0d2f766ca54be4f2e693fbd7e5f0cbd3559d2d8aad7b51b47cac
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.691628 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Feb 16 15:06:02 crc kubenswrapper[4748]: W0216 15:06:02.708558 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2575357f_5b6f_4fdd_9be0_ff91ca86641a.slice/crio-a246395c98b19968487bbc96565d2cec6ce9d9738939a9a93851fd56dd2cfaa3 WatchSource:0}: Error finding container a246395c98b19968487bbc96565d2cec6ce9d9738939a9a93851fd56dd2cfaa3: Status 404 returned error can't find the container with id a246395c98b19968487bbc96565d2cec6ce9d9738939a9a93851fd56dd2cfaa3
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.958602 4748 generic.go:334] "Generic (PLEG): container finished" podID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerID="a71c7e7982687450deb4ef89d3ab7d4c5102b2e56aab249c0c88b8d63002312c" exitCode=0
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.958731 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw" event={"ID":"bb0800a2-3982-4128-adfe-ac7e8700e11d","Type":"ContainerDied","Data":"a71c7e7982687450deb4ef89d3ab7d4c5102b2e56aab249c0c88b8d63002312c"}
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.962034 4748 generic.go:334] "Generic (PLEG): container finished" podID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerID="943511d22737e9c8f62edcc1b981a633547ee58996590651415f0d16ec88448f" exitCode=0
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.962074 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sp4" event={"ID":"86abd3cb-3262-40ce-8eec-933091ea8de8","Type":"ContainerDied","Data":"943511d22737e9c8f62edcc1b981a633547ee58996590651415f0d16ec88448f"}
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.962238 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sp4" event={"ID":"86abd3cb-3262-40ce-8eec-933091ea8de8","Type":"ContainerStarted","Data":"31804db7310c0d2f766ca54be4f2e693fbd7e5f0cbd3559d2d8aad7b51b47cac"}
Feb 16 15:06:02 crc kubenswrapper[4748]: I0216 15:06:02.963579 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"2575357f-5b6f-4fdd-9be0-ff91ca86641a","Type":"ContainerStarted","Data":"a246395c98b19968487bbc96565d2cec6ce9d9738939a9a93851fd56dd2cfaa3"}
Feb 16 15:06:03 crc kubenswrapper[4748]: I0216 15:06:03.971920 4748 generic.go:334] "Generic (PLEG): container finished" podID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerID="0bd606fcc9b4faf4d79c5e9b2a93d0ac6e5d50d7d774b37a949c63995e745b1b" exitCode=0
Feb 16 15:06:03 crc kubenswrapper[4748]: I0216 15:06:03.971987 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw" event={"ID":"bb0800a2-3982-4128-adfe-ac7e8700e11d","Type":"ContainerDied","Data":"0bd606fcc9b4faf4d79c5e9b2a93d0ac6e5d50d7d774b37a949c63995e745b1b"}
Feb 16 15:06:03 crc kubenswrapper[4748]: I0216 15:06:03.975095 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sp4" event={"ID":"86abd3cb-3262-40ce-8eec-933091ea8de8","Type":"ContainerStarted","Data":"51f9fd91c10f8fa715361cb25e1958c25d04fec854231c0217c80c7b05bef16e"}
Feb 16 15:06:04 crc kubenswrapper[4748]: I0216 15:06:04.739601 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:06:04 crc kubenswrapper[4748]: I0216 15:06:04.739692 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:06:04 crc kubenswrapper[4748]: I0216 15:06:04.987298 4748 generic.go:334] "Generic (PLEG): container finished" podID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerID="51f9fd91c10f8fa715361cb25e1958c25d04fec854231c0217c80c7b05bef16e" exitCode=0
Feb 16 15:06:04 crc kubenswrapper[4748]: I0216 15:06:04.987371 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sp4" event={"ID":"86abd3cb-3262-40ce-8eec-933091ea8de8","Type":"ContainerDied","Data":"51f9fd91c10f8fa715361cb25e1958c25d04fec854231c0217c80c7b05bef16e"}
Feb 16 15:06:05 crc kubenswrapper[4748]: I0216 15:06:05.998250 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw" event={"ID":"bb0800a2-3982-4128-adfe-ac7e8700e11d","Type":"ContainerDied","Data":"27a5c70e0bc35f67fc21b17ffe3fae40c6cb736490ba7fe3db595b9217970913"}
Feb 16 15:06:05 crc kubenswrapper[4748]:
I0216 15:06:05.998360 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27a5c70e0bc35f67fc21b17ffe3fae40c6cb736490ba7fe3db595b9217970913" Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.015193 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw" Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.161350 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-bundle\") pod \"bb0800a2-3982-4128-adfe-ac7e8700e11d\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.161452 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-util\") pod \"bb0800a2-3982-4128-adfe-ac7e8700e11d\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.161497 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvjvh\" (UniqueName: \"kubernetes.io/projected/bb0800a2-3982-4128-adfe-ac7e8700e11d-kube-api-access-fvjvh\") pod \"bb0800a2-3982-4128-adfe-ac7e8700e11d\" (UID: \"bb0800a2-3982-4128-adfe-ac7e8700e11d\") " Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.162500 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-bundle" (OuterVolumeSpecName: "bundle") pod "bb0800a2-3982-4128-adfe-ac7e8700e11d" (UID: "bb0800a2-3982-4128-adfe-ac7e8700e11d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.171096 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb0800a2-3982-4128-adfe-ac7e8700e11d-kube-api-access-fvjvh" (OuterVolumeSpecName: "kube-api-access-fvjvh") pod "bb0800a2-3982-4128-adfe-ac7e8700e11d" (UID: "bb0800a2-3982-4128-adfe-ac7e8700e11d"). InnerVolumeSpecName "kube-api-access-fvjvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.176391 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-util" (OuterVolumeSpecName: "util") pod "bb0800a2-3982-4128-adfe-ac7e8700e11d" (UID: "bb0800a2-3982-4128-adfe-ac7e8700e11d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.262934 4748 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.262977 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvjvh\" (UniqueName: \"kubernetes.io/projected/bb0800a2-3982-4128-adfe-ac7e8700e11d-kube-api-access-fvjvh\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:06 crc kubenswrapper[4748]: I0216 15:06:06.262995 4748 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb0800a2-3982-4128-adfe-ac7e8700e11d-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:07 crc kubenswrapper[4748]: I0216 15:06:07.003499 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" 
event={"ID":"2575357f-5b6f-4fdd-9be0-ff91ca86641a","Type":"ContainerStarted","Data":"875a5fcbea59c7090259429ae68d13c6bd0a33c367eb2fd6198d3659616f8f9f"} Feb 16 15:06:07 crc kubenswrapper[4748]: I0216 15:06:07.003514 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw" Feb 16 15:06:07 crc kubenswrapper[4748]: I0216 15:06:07.026749 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.494120611 podStartE2EDuration="8.026706051s" podCreationTimestamp="2026-02-16 15:05:59 +0000 UTC" firstStartedPulling="2026-02-16 15:06:02.71121751 +0000 UTC m=+788.402886559" lastFinishedPulling="2026-02-16 15:06:06.24380295 +0000 UTC m=+791.935471999" observedRunningTime="2026-02-16 15:06:07.024341933 +0000 UTC m=+792.716010982" watchObservedRunningTime="2026-02-16 15:06:07.026706051 +0000 UTC m=+792.718375100" Feb 16 15:06:08 crc kubenswrapper[4748]: I0216 15:06:08.015368 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sp4" event={"ID":"86abd3cb-3262-40ce-8eec-933091ea8de8","Type":"ContainerStarted","Data":"ba9a9663d6e05fb23ab7d638524e69e5f551798668218ad415289e6285751183"} Feb 16 15:06:08 crc kubenswrapper[4748]: I0216 15:06:08.046864 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s9sp4" podStartSLOduration=2.950841554 podStartE2EDuration="7.046843324s" podCreationTimestamp="2026-02-16 15:06:01 +0000 UTC" firstStartedPulling="2026-02-16 15:06:02.963625292 +0000 UTC m=+788.655294331" lastFinishedPulling="2026-02-16 15:06:07.059627052 +0000 UTC m=+792.751296101" observedRunningTime="2026-02-16 15:06:08.043811629 +0000 UTC m=+793.735480708" watchObservedRunningTime="2026-02-16 15:06:08.046843324 +0000 UTC m=+793.738512363" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.306427 
4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s9sp4" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.308101 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s9sp4" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.348594 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6"] Feb 16 15:06:12 crc kubenswrapper[4748]: E0216 15:06:12.348908 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerName="util" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.348923 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerName="util" Feb 16 15:06:12 crc kubenswrapper[4748]: E0216 15:06:12.348936 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerName="pull" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.348942 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerName="pull" Feb 16 15:06:12 crc kubenswrapper[4748]: E0216 15:06:12.348956 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerName="extract" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.348964 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerName="extract" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.349092 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb0800a2-3982-4128-adfe-ac7e8700e11d" containerName="extract" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.349810 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.354540 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.354882 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.355138 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.355702 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.355969 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-cftl7" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.359079 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.372195 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6"] Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.542783 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-apiservice-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 
15:06:12.542841 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-webhook-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.542870 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.543068 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-manager-config\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.543208 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x5ph\" (UniqueName: \"kubernetes.io/projected/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-kube-api-access-8x5ph\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.645146 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-apiservice-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.645345 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-webhook-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.645373 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.645401 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-manager-config\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.645427 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x5ph\" (UniqueName: \"kubernetes.io/projected/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-kube-api-access-8x5ph\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: 
\"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.646570 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-manager-config\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.656000 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.658372 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-webhook-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.658949 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-apiservice-cert\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.668522 4748 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8x5ph\" (UniqueName: \"kubernetes.io/projected/0ef4556c-a65b-4be7-9b8e-36f4423e84b1-kube-api-access-8x5ph\") pod \"loki-operator-controller-manager-668f94f855-4cdp6\" (UID: \"0ef4556c-a65b-4be7-9b8e-36f4423e84b1\") " pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:12 crc kubenswrapper[4748]: I0216 15:06:12.676269 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:13 crc kubenswrapper[4748]: I0216 15:06:13.029506 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6"] Feb 16 15:06:13 crc kubenswrapper[4748]: I0216 15:06:13.328586 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" event={"ID":"0ef4556c-a65b-4be7-9b8e-36f4423e84b1","Type":"ContainerStarted","Data":"3a37dab04ce3ace0eca0297de363989f4da8e9fb95b864dd360abc6c6bf7bc8e"} Feb 16 15:06:13 crc kubenswrapper[4748]: I0216 15:06:13.371663 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s9sp4" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="registry-server" probeResult="failure" output=< Feb 16 15:06:13 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:06:13 crc kubenswrapper[4748]: > Feb 16 15:06:19 crc kubenswrapper[4748]: I0216 15:06:19.380001 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" event={"ID":"0ef4556c-a65b-4be7-9b8e-36f4423e84b1","Type":"ContainerStarted","Data":"45819fb17a8f700b84d3163c2ae16688b0a523cfca6acce8fd41ca9b565e5436"} Feb 16 15:06:22 crc kubenswrapper[4748]: I0216 15:06:22.363427 4748 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s9sp4" Feb 16 15:06:22 crc kubenswrapper[4748]: I0216 15:06:22.421910 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s9sp4" Feb 16 15:06:24 crc kubenswrapper[4748]: I0216 15:06:24.669178 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9sp4"] Feb 16 15:06:24 crc kubenswrapper[4748]: I0216 15:06:24.669932 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s9sp4" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="registry-server" containerID="cri-o://ba9a9663d6e05fb23ab7d638524e69e5f551798668218ad415289e6285751183" gracePeriod=2 Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.429530 4748 generic.go:334] "Generic (PLEG): container finished" podID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerID="ba9a9663d6e05fb23ab7d638524e69e5f551798668218ad415289e6285751183" exitCode=0 Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.429748 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9sp4" event={"ID":"86abd3cb-3262-40ce-8eec-933091ea8de8","Type":"ContainerDied","Data":"ba9a9663d6e05fb23ab7d638524e69e5f551798668218ad415289e6285751183"} Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.574853 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sp4" Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.610487 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-utilities\") pod \"86abd3cb-3262-40ce-8eec-933091ea8de8\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.610584 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7h75\" (UniqueName: \"kubernetes.io/projected/86abd3cb-3262-40ce-8eec-933091ea8de8-kube-api-access-b7h75\") pod \"86abd3cb-3262-40ce-8eec-933091ea8de8\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.610697 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-catalog-content\") pod \"86abd3cb-3262-40ce-8eec-933091ea8de8\" (UID: \"86abd3cb-3262-40ce-8eec-933091ea8de8\") " Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.612451 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-utilities" (OuterVolumeSpecName: "utilities") pod "86abd3cb-3262-40ce-8eec-933091ea8de8" (UID: "86abd3cb-3262-40ce-8eec-933091ea8de8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.617649 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86abd3cb-3262-40ce-8eec-933091ea8de8-kube-api-access-b7h75" (OuterVolumeSpecName: "kube-api-access-b7h75") pod "86abd3cb-3262-40ce-8eec-933091ea8de8" (UID: "86abd3cb-3262-40ce-8eec-933091ea8de8"). InnerVolumeSpecName "kube-api-access-b7h75". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.712760 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.712795 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7h75\" (UniqueName: \"kubernetes.io/projected/86abd3cb-3262-40ce-8eec-933091ea8de8-kube-api-access-b7h75\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.764041 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86abd3cb-3262-40ce-8eec-933091ea8de8" (UID: "86abd3cb-3262-40ce-8eec-933091ea8de8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:06:25 crc kubenswrapper[4748]: I0216 15:06:25.814958 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86abd3cb-3262-40ce-8eec-933091ea8de8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.440103 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" event={"ID":"0ef4556c-a65b-4be7-9b8e-36f4423e84b1","Type":"ContainerStarted","Data":"d7cf478a2307f00171c2ec69fd3a0312f7965b64cebfa98d2787759e1ea0b298"} Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.440562 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.442606 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s9sp4" event={"ID":"86abd3cb-3262-40ce-8eec-933091ea8de8","Type":"ContainerDied","Data":"31804db7310c0d2f766ca54be4f2e693fbd7e5f0cbd3559d2d8aad7b51b47cac"} Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.442650 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9sp4" Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.442672 4748 scope.go:117] "RemoveContainer" containerID="ba9a9663d6e05fb23ab7d638524e69e5f551798668218ad415289e6285751183" Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.443435 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.471290 4748 scope.go:117] "RemoveContainer" containerID="51f9fd91c10f8fa715361cb25e1958c25d04fec854231c0217c80c7b05bef16e" Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.471750 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-668f94f855-4cdp6" podStartSLOduration=2.110406748 podStartE2EDuration="14.471736129s" podCreationTimestamp="2026-02-16 15:06:12 +0000 UTC" firstStartedPulling="2026-02-16 15:06:13.039395313 +0000 UTC m=+798.731064352" lastFinishedPulling="2026-02-16 15:06:25.400724694 +0000 UTC m=+811.092393733" observedRunningTime="2026-02-16 15:06:26.46607658 +0000 UTC m=+812.157745619" watchObservedRunningTime="2026-02-16 15:06:26.471736129 +0000 UTC m=+812.163405178" Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.496802 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9sp4"] Feb 16 15:06:26 crc kubenswrapper[4748]: I0216 15:06:26.502379 4748 scope.go:117] "RemoveContainer" containerID="943511d22737e9c8f62edcc1b981a633547ee58996590651415f0d16ec88448f" Feb 16 15:06:26 
crc kubenswrapper[4748]: I0216 15:06:26.511321 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s9sp4"] Feb 16 15:06:27 crc kubenswrapper[4748]: I0216 15:06:27.002645 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" path="/var/lib/kubelet/pods/86abd3cb-3262-40ce-8eec-933091ea8de8/volumes" Feb 16 15:06:34 crc kubenswrapper[4748]: I0216 15:06:34.729163 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:06:34 crc kubenswrapper[4748]: I0216 15:06:34.729681 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:06:34 crc kubenswrapper[4748]: I0216 15:06:34.729752 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 15:06:34 crc kubenswrapper[4748]: I0216 15:06:34.730398 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4e49a4e16e48e1a2e71e7ad48688259341e07a21e8d2998708ad1242f0a4ff61"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:06:34 crc kubenswrapper[4748]: I0216 15:06:34.730477 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" 
podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://4e49a4e16e48e1a2e71e7ad48688259341e07a21e8d2998708ad1242f0a4ff61" gracePeriod=600 Feb 16 15:06:35 crc kubenswrapper[4748]: I0216 15:06:35.512202 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="4e49a4e16e48e1a2e71e7ad48688259341e07a21e8d2998708ad1242f0a4ff61" exitCode=0 Feb 16 15:06:35 crc kubenswrapper[4748]: I0216 15:06:35.512263 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"4e49a4e16e48e1a2e71e7ad48688259341e07a21e8d2998708ad1242f0a4ff61"} Feb 16 15:06:35 crc kubenswrapper[4748]: I0216 15:06:35.512685 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"67a90ffaffa57c523d924f86672f533c001cdd9525b4878908f13274a4bee682"} Feb 16 15:06:35 crc kubenswrapper[4748]: I0216 15:06:35.512760 4748 scope.go:117] "RemoveContainer" containerID="e3a50946846049e1054f070fd6aacdb230e1f890e503599113e81908b3aa8a60" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.053421 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26"] Feb 16 15:06:59 crc kubenswrapper[4748]: E0216 15:06:59.054700 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="extract-content" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.054749 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="extract-content" Feb 16 15:06:59 crc kubenswrapper[4748]: E0216 15:06:59.054787 4748 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="registry-server" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.054803 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="registry-server" Feb 16 15:06:59 crc kubenswrapper[4748]: E0216 15:06:59.054826 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="extract-utilities" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.054843 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="extract-utilities" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.055087 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="86abd3cb-3262-40ce-8eec-933091ea8de8" containerName="registry-server" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.056929 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.060884 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.061126 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26"] Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.174474 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf5zt\" (UniqueName: \"kubernetes.io/projected/77b55606-9c38-4ca2-8192-d8845aa50a7e-kube-api-access-rf5zt\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.174543 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.174636 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: 
I0216 15:06:59.276677 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf5zt\" (UniqueName: \"kubernetes.io/projected/77b55606-9c38-4ca2-8192-d8845aa50a7e-kube-api-access-rf5zt\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.276784 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.276837 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.277350 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-util\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.277496 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-bundle\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.303938 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf5zt\" (UniqueName: \"kubernetes.io/projected/77b55606-9c38-4ca2-8192-d8845aa50a7e-kube-api-access-rf5zt\") pod \"f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:06:59 crc kubenswrapper[4748]: I0216 15:06:59.386326 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:07:00 crc kubenswrapper[4748]: I0216 15:07:00.008625 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26"] Feb 16 15:07:00 crc kubenswrapper[4748]: W0216 15:07:00.017102 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b55606_9c38_4ca2_8192_d8845aa50a7e.slice/crio-11077c0345cbbb117a4b9c034cec4f61e9156c95dfda53d55a500b9efdefe125 WatchSource:0}: Error finding container 11077c0345cbbb117a4b9c034cec4f61e9156c95dfda53d55a500b9efdefe125: Status 404 returned error can't find the container with id 11077c0345cbbb117a4b9c034cec4f61e9156c95dfda53d55a500b9efdefe125 Feb 16 15:07:00 crc kubenswrapper[4748]: I0216 15:07:00.720217 4748 generic.go:334] "Generic (PLEG): container finished" podID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerID="fc3d0d3ba3560af0dae69435defe8be373aec38bf5d8f4a9d2c4e7883bff1ed3" 
exitCode=0 Feb 16 15:07:00 crc kubenswrapper[4748]: I0216 15:07:00.720283 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" event={"ID":"77b55606-9c38-4ca2-8192-d8845aa50a7e","Type":"ContainerDied","Data":"fc3d0d3ba3560af0dae69435defe8be373aec38bf5d8f4a9d2c4e7883bff1ed3"} Feb 16 15:07:00 crc kubenswrapper[4748]: I0216 15:07:00.720318 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" event={"ID":"77b55606-9c38-4ca2-8192-d8845aa50a7e","Type":"ContainerStarted","Data":"11077c0345cbbb117a4b9c034cec4f61e9156c95dfda53d55a500b9efdefe125"} Feb 16 15:07:02 crc kubenswrapper[4748]: I0216 15:07:02.739845 4748 generic.go:334] "Generic (PLEG): container finished" podID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerID="6ebf94787e312d43b8cdb520d4f3638440c3ceb7991bb21afd37069f84d03086" exitCode=0 Feb 16 15:07:02 crc kubenswrapper[4748]: I0216 15:07:02.739979 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" event={"ID":"77b55606-9c38-4ca2-8192-d8845aa50a7e","Type":"ContainerDied","Data":"6ebf94787e312d43b8cdb520d4f3638440c3ceb7991bb21afd37069f84d03086"} Feb 16 15:07:03 crc kubenswrapper[4748]: I0216 15:07:03.753977 4748 generic.go:334] "Generic (PLEG): container finished" podID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerID="cf5202a30040785e7f1df3f9322b93e5627f0e68294a3fd840afdcd391b53f5d" exitCode=0 Feb 16 15:07:03 crc kubenswrapper[4748]: I0216 15:07:03.754565 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" event={"ID":"77b55606-9c38-4ca2-8192-d8845aa50a7e","Type":"ContainerDied","Data":"cf5202a30040785e7f1df3f9322b93e5627f0e68294a3fd840afdcd391b53f5d"} Feb 16 15:07:05 crc 
kubenswrapper[4748]: I0216 15:07:05.090661 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.202433 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-util\") pod \"77b55606-9c38-4ca2-8192-d8845aa50a7e\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.202566 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf5zt\" (UniqueName: \"kubernetes.io/projected/77b55606-9c38-4ca2-8192-d8845aa50a7e-kube-api-access-rf5zt\") pod \"77b55606-9c38-4ca2-8192-d8845aa50a7e\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.202612 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-bundle\") pod \"77b55606-9c38-4ca2-8192-d8845aa50a7e\" (UID: \"77b55606-9c38-4ca2-8192-d8845aa50a7e\") " Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.203455 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-bundle" (OuterVolumeSpecName: "bundle") pod "77b55606-9c38-4ca2-8192-d8845aa50a7e" (UID: "77b55606-9c38-4ca2-8192-d8845aa50a7e"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.210987 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b55606-9c38-4ca2-8192-d8845aa50a7e-kube-api-access-rf5zt" (OuterVolumeSpecName: "kube-api-access-rf5zt") pod "77b55606-9c38-4ca2-8192-d8845aa50a7e" (UID: "77b55606-9c38-4ca2-8192-d8845aa50a7e"). InnerVolumeSpecName "kube-api-access-rf5zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.216344 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-util" (OuterVolumeSpecName: "util") pod "77b55606-9c38-4ca2-8192-d8845aa50a7e" (UID: "77b55606-9c38-4ca2-8192-d8845aa50a7e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.304636 4748 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.304675 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rf5zt\" (UniqueName: \"kubernetes.io/projected/77b55606-9c38-4ca2-8192-d8845aa50a7e-kube-api-access-rf5zt\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.304687 4748 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/77b55606-9c38-4ca2-8192-d8845aa50a7e-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.769480 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" 
event={"ID":"77b55606-9c38-4ca2-8192-d8845aa50a7e","Type":"ContainerDied","Data":"11077c0345cbbb117a4b9c034cec4f61e9156c95dfda53d55a500b9efdefe125"} Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.769522 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11077c0345cbbb117a4b9c034cec4f61e9156c95dfda53d55a500b9efdefe125" Feb 16 15:07:05 crc kubenswrapper[4748]: I0216 15:07:05.769541 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.071084 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bbfnc"] Feb 16 15:07:11 crc kubenswrapper[4748]: E0216 15:07:11.071823 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerName="extract" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.071835 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerName="extract" Feb 16 15:07:11 crc kubenswrapper[4748]: E0216 15:07:11.071852 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerName="util" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.071857 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerName="util" Feb 16 15:07:11 crc kubenswrapper[4748]: E0216 15:07:11.071870 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerName="pull" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.071876 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerName="pull" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.071974 4748 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="77b55606-9c38-4ca2-8192-d8845aa50a7e" containerName="extract" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.072354 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.075609 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.075626 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-95rr4" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.075799 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.097900 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bbfnc"] Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.195765 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbpr8\" (UniqueName: \"kubernetes.io/projected/120dfea6-1405-433f-bf1f-11903bc821e8-kube-api-access-fbpr8\") pod \"nmstate-operator-694c9596b7-bbfnc\" (UID: \"120dfea6-1405-433f-bf1f-11903bc821e8\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.297615 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbpr8\" (UniqueName: \"kubernetes.io/projected/120dfea6-1405-433f-bf1f-11903bc821e8-kube-api-access-fbpr8\") pod \"nmstate-operator-694c9596b7-bbfnc\" (UID: \"120dfea6-1405-433f-bf1f-11903bc821e8\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.329418 4748 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fbpr8\" (UniqueName: \"kubernetes.io/projected/120dfea6-1405-433f-bf1f-11903bc821e8-kube-api-access-fbpr8\") pod \"nmstate-operator-694c9596b7-bbfnc\" (UID: \"120dfea6-1405-433f-bf1f-11903bc821e8\") " pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.393682 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.610143 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-694c9596b7-bbfnc"] Feb 16 15:07:11 crc kubenswrapper[4748]: I0216 15:07:11.809573 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" event={"ID":"120dfea6-1405-433f-bf1f-11903bc821e8","Type":"ContainerStarted","Data":"6cfbc758ece587d2b89fdbfca623895fe8fb91d4ec64f32ccee08d98c742e6bd"} Feb 16 15:07:14 crc kubenswrapper[4748]: I0216 15:07:14.840785 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" event={"ID":"120dfea6-1405-433f-bf1f-11903bc821e8","Type":"ContainerStarted","Data":"97a2edcfee14500c38b255c417abca469046c0d534050b7c5ab68dcecae51100"} Feb 16 15:07:14 crc kubenswrapper[4748]: I0216 15:07:14.863009 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-694c9596b7-bbfnc" podStartSLOduration=1.785378733 podStartE2EDuration="3.862979671s" podCreationTimestamp="2026-02-16 15:07:11 +0000 UTC" firstStartedPulling="2026-02-16 15:07:11.617161948 +0000 UTC m=+857.308830997" lastFinishedPulling="2026-02-16 15:07:13.694762896 +0000 UTC m=+859.386431935" observedRunningTime="2026-02-16 15:07:14.861093785 +0000 UTC m=+860.552762834" watchObservedRunningTime="2026-02-16 15:07:14.862979671 +0000 UTC m=+860.554648730" Feb 16 15:07:20 crc 
kubenswrapper[4748]: I0216 15:07:20.234284 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-bh57n"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.236262 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.239445 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-cll59"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.240489 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.240690 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-97bzf" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.242300 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.257947 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-bh57n"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.267750 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knl7k\" (UniqueName: \"kubernetes.io/projected/ba9f32c6-5016-44c6-a2e9-1f8424f92e0a-kube-api-access-knl7k\") pod \"nmstate-metrics-58c85c668d-bh57n\" (UID: \"ba9f32c6-5016-44c6-a2e9-1f8424f92e0a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.275593 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-lbl2g"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.276690 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.288037 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-cll59"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.369930 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/65a2e9f8-c446-48be-a887-4da74a413a77-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-cll59\" (UID: \"65a2e9f8-c446-48be-a887-4da74a413a77\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.369998 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57v82\" (UniqueName: \"kubernetes.io/projected/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-kube-api-access-57v82\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.370080 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps86m\" (UniqueName: \"kubernetes.io/projected/65a2e9f8-c446-48be-a887-4da74a413a77-kube-api-access-ps86m\") pod \"nmstate-webhook-866bcb46dc-cll59\" (UID: \"65a2e9f8-c446-48be-a887-4da74a413a77\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.370113 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-ovs-socket\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.370155 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knl7k\" (UniqueName: \"kubernetes.io/projected/ba9f32c6-5016-44c6-a2e9-1f8424f92e0a-kube-api-access-knl7k\") pod \"nmstate-metrics-58c85c668d-bh57n\" (UID: \"ba9f32c6-5016-44c6-a2e9-1f8424f92e0a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.370182 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-nmstate-lock\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.370206 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-dbus-socket\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.396803 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knl7k\" (UniqueName: \"kubernetes.io/projected/ba9f32c6-5016-44c6-a2e9-1f8424f92e0a-kube-api-access-knl7k\") pod \"nmstate-metrics-58c85c668d-bh57n\" (UID: \"ba9f32c6-5016-44c6-a2e9-1f8424f92e0a\") " pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.397453 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.398792 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.408107 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.408466 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2sq8z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.408629 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.414247 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471349 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps86m\" (UniqueName: \"kubernetes.io/projected/65a2e9f8-c446-48be-a887-4da74a413a77-kube-api-access-ps86m\") pod \"nmstate-webhook-866bcb46dc-cll59\" (UID: \"65a2e9f8-c446-48be-a887-4da74a413a77\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471419 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-ovs-socket\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471470 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-nmstate-lock\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: 
I0216 15:07:20.471510 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-dbus-socket\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471526 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-ovs-socket\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471538 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rhrb\" (UniqueName: \"kubernetes.io/projected/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-kube-api-access-8rhrb\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471566 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471602 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471653 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/65a2e9f8-c446-48be-a887-4da74a413a77-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-cll59\" (UID: \"65a2e9f8-c446-48be-a887-4da74a413a77\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.471685 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57v82\" (UniqueName: \"kubernetes.io/projected/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-kube-api-access-57v82\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.472003 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-dbus-socket\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.472022 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-nmstate-lock\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.494222 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/65a2e9f8-c446-48be-a887-4da74a413a77-tls-key-pair\") pod \"nmstate-webhook-866bcb46dc-cll59\" (UID: \"65a2e9f8-c446-48be-a887-4da74a413a77\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 
15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.495386 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps86m\" (UniqueName: \"kubernetes.io/projected/65a2e9f8-c446-48be-a887-4da74a413a77-kube-api-access-ps86m\") pod \"nmstate-webhook-866bcb46dc-cll59\" (UID: \"65a2e9f8-c446-48be-a887-4da74a413a77\") " pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.498571 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57v82\" (UniqueName: \"kubernetes.io/projected/b22e9707-ef85-4f3f-81aa-bd2a419a4a28-kube-api-access-57v82\") pod \"nmstate-handler-lbl2g\" (UID: \"b22e9707-ef85-4f3f-81aa-bd2a419a4a28\") " pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.572869 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.572920 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rhrb\" (UniqueName: \"kubernetes.io/projected/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-kube-api-access-8rhrb\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.572957 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " 
pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.573534 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.573990 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-nginx-conf\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.588026 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-plugin-serving-cert\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.591543 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.610295 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.636488 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rhrb\" (UniqueName: \"kubernetes.io/projected/7d0947f4-e6d9-46c9-b7d1-2dc2c788d855-kube-api-access-8rhrb\") pod \"nmstate-console-plugin-5c78fc5d65-2j5kv\" (UID: \"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855\") " pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.704760 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-757c9676d8-lwl7z"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.705767 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.743281 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.745011 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-757c9676d8-lwl7z"] Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.780605 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx657\" (UniqueName: \"kubernetes.io/projected/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-kube-api-access-lx657\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.780783 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-config\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") 
" pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.780809 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-trusted-ca-bundle\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.780839 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-service-ca\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.780862 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-oauth-serving-cert\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.780881 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-oauth-config\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.780913 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-serving-cert\") pod 
\"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.882151 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-oauth-serving-cert\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.882511 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-oauth-config\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.882559 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-serving-cert\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.882599 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx657\" (UniqueName: \"kubernetes.io/projected/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-kube-api-access-lx657\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.882667 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-config\") pod 
\"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.882696 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-trusted-ca-bundle\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.882764 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-service-ca\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.883381 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-oauth-serving-cert\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.883628 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-config\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.883754 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-service-ca\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " 
pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.884174 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-trusted-ca-bundle\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.887479 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-oauth-config\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.889749 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-console-serving-cert\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.893145 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lbl2g" event={"ID":"b22e9707-ef85-4f3f-81aa-bd2a419a4a28","Type":"ContainerStarted","Data":"812ca9e913a222f27a244bbb496ca3d7abf70c79556aee665367d86374394975"} Feb 16 15:07:20 crc kubenswrapper[4748]: I0216 15:07:20.910150 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx657\" (UniqueName: \"kubernetes.io/projected/8a1b5b77-d7dd-46f3-aae5-78f50b637a34-kube-api-access-lx657\") pod \"console-757c9676d8-lwl7z\" (UID: \"8a1b5b77-d7dd-46f3-aae5-78f50b637a34\") " pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.036317 4748 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.107074 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv"] Feb 16 15:07:21 crc kubenswrapper[4748]: W0216 15:07:21.122623 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d0947f4_e6d9_46c9_b7d1_2dc2c788d855.slice/crio-dad698b453f4994f5053e9269d23ffc79262f65735b3bc7d49da6403360b3d85 WatchSource:0}: Error finding container dad698b453f4994f5053e9269d23ffc79262f65735b3bc7d49da6403360b3d85: Status 404 returned error can't find the container with id dad698b453f4994f5053e9269d23ffc79262f65735b3bc7d49da6403360b3d85 Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.196791 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-866bcb46dc-cll59"] Feb 16 15:07:21 crc kubenswrapper[4748]: W0216 15:07:21.205453 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65a2e9f8_c446_48be_a887_4da74a413a77.slice/crio-e942fb5716fe92b78170c0c838ae088893e9ae805d7d61754ab0ace71c1df19e WatchSource:0}: Error finding container e942fb5716fe92b78170c0c838ae088893e9ae805d7d61754ab0ace71c1df19e: Status 404 returned error can't find the container with id e942fb5716fe92b78170c0c838ae088893e9ae805d7d61754ab0ace71c1df19e Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.254824 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58c85c668d-bh57n"] Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.259525 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-757c9676d8-lwl7z"] Feb 16 15:07:21 crc kubenswrapper[4748]: W0216 15:07:21.260134 4748 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba9f32c6_5016_44c6_a2e9_1f8424f92e0a.slice/crio-67fdd7e2aa65cc9fd45d58c6d1ff4e389981f8290daa59832eb838d000afa393 WatchSource:0}: Error finding container 67fdd7e2aa65cc9fd45d58c6d1ff4e389981f8290daa59832eb838d000afa393: Status 404 returned error can't find the container with id 67fdd7e2aa65cc9fd45d58c6d1ff4e389981f8290daa59832eb838d000afa393 Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.903856 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" event={"ID":"ba9f32c6-5016-44c6-a2e9-1f8424f92e0a","Type":"ContainerStarted","Data":"67fdd7e2aa65cc9fd45d58c6d1ff4e389981f8290daa59832eb838d000afa393"} Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.905912 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" event={"ID":"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855","Type":"ContainerStarted","Data":"dad698b453f4994f5053e9269d23ffc79262f65735b3bc7d49da6403360b3d85"} Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.907514 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" event={"ID":"65a2e9f8-c446-48be-a887-4da74a413a77","Type":"ContainerStarted","Data":"e942fb5716fe92b78170c0c838ae088893e9ae805d7d61754ab0ace71c1df19e"} Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.909879 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-757c9676d8-lwl7z" event={"ID":"8a1b5b77-d7dd-46f3-aae5-78f50b637a34","Type":"ContainerStarted","Data":"4393a73bc4bc9e0b5a70fd2444d6e67f97bd59787d32aa35455c08143093a867"} Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.909911 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-757c9676d8-lwl7z" 
event={"ID":"8a1b5b77-d7dd-46f3-aae5-78f50b637a34","Type":"ContainerStarted","Data":"389cc5e2e7c5a22ec2ac9fc946b43cb96cebabf41deea093e29a58fccfa5a11b"} Feb 16 15:07:21 crc kubenswrapper[4748]: I0216 15:07:21.935696 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-757c9676d8-lwl7z" podStartSLOduration=1.935658723 podStartE2EDuration="1.935658723s" podCreationTimestamp="2026-02-16 15:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:07:21.931932912 +0000 UTC m=+867.623601981" watchObservedRunningTime="2026-02-16 15:07:21.935658723 +0000 UTC m=+867.627327762" Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.937755 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" event={"ID":"65a2e9f8-c446-48be-a887-4da74a413a77","Type":"ContainerStarted","Data":"1611e48a9b038d6ed3dd9a597c80b2038dbc4de042f08238e61b5795e3b2586e"} Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.938863 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.943465 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-lbl2g" event={"ID":"b22e9707-ef85-4f3f-81aa-bd2a419a4a28","Type":"ContainerStarted","Data":"c985096accaf8b0db8e72f07b198b5ecb96b045882e68160ede6e65a1ca1d2dc"} Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.944027 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.946007 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" 
event={"ID":"7d0947f4-e6d9-46c9-b7d1-2dc2c788d855","Type":"ContainerStarted","Data":"34d126345c5bf6a13a888d03069177f758349bdee5c53d6c3c1b1d2f8cefa5ac"} Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.951339 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" event={"ID":"ba9f32c6-5016-44c6-a2e9-1f8424f92e0a","Type":"ContainerStarted","Data":"96a62a6461956eee6de8762029a1920ff92b8830a4abfd559034843edec45eda"} Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.961671 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" podStartSLOduration=2.345677544 podStartE2EDuration="4.961643489s" podCreationTimestamp="2026-02-16 15:07:20 +0000 UTC" firstStartedPulling="2026-02-16 15:07:21.208924229 +0000 UTC m=+866.900593268" lastFinishedPulling="2026-02-16 15:07:23.824890134 +0000 UTC m=+869.516559213" observedRunningTime="2026-02-16 15:07:24.956402681 +0000 UTC m=+870.648071740" watchObservedRunningTime="2026-02-16 15:07:24.961643489 +0000 UTC m=+870.653312528" Feb 16 15:07:24 crc kubenswrapper[4748]: I0216 15:07:24.982662 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5c78fc5d65-2j5kv" podStartSLOduration=2.295076225 podStartE2EDuration="4.982633224s" podCreationTimestamp="2026-02-16 15:07:20 +0000 UTC" firstStartedPulling="2026-02-16 15:07:21.127481583 +0000 UTC m=+866.819150622" lastFinishedPulling="2026-02-16 15:07:23.815038552 +0000 UTC m=+869.506707621" observedRunningTime="2026-02-16 15:07:24.981843204 +0000 UTC m=+870.673512253" watchObservedRunningTime="2026-02-16 15:07:24.982633224 +0000 UTC m=+870.674302273" Feb 16 15:07:25 crc kubenswrapper[4748]: I0216 15:07:25.016168 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-lbl2g" podStartSLOduration=1.9256360890000002 
podStartE2EDuration="5.016128715s" podCreationTimestamp="2026-02-16 15:07:20 +0000 UTC" firstStartedPulling="2026-02-16 15:07:20.726950205 +0000 UTC m=+866.418619244" lastFinishedPulling="2026-02-16 15:07:23.817442791 +0000 UTC m=+869.509111870" observedRunningTime="2026-02-16 15:07:25.015270614 +0000 UTC m=+870.706939653" watchObservedRunningTime="2026-02-16 15:07:25.016128715 +0000 UTC m=+870.707797754" Feb 16 15:07:26 crc kubenswrapper[4748]: I0216 15:07:26.966301 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" event={"ID":"ba9f32c6-5016-44c6-a2e9-1f8424f92e0a","Type":"ContainerStarted","Data":"d7095f224ec7e1496cc530abffd52a5f6e22803c10223c9c880f312f00857da8"} Feb 16 15:07:26 crc kubenswrapper[4748]: I0216 15:07:26.997456 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58c85c668d-bh57n" podStartSLOduration=2.01548781 podStartE2EDuration="6.997427071s" podCreationTimestamp="2026-02-16 15:07:20 +0000 UTC" firstStartedPulling="2026-02-16 15:07:21.263597979 +0000 UTC m=+866.955267018" lastFinishedPulling="2026-02-16 15:07:26.24553724 +0000 UTC m=+871.937206279" observedRunningTime="2026-02-16 15:07:26.992655505 +0000 UTC m=+872.684324554" watchObservedRunningTime="2026-02-16 15:07:26.997427071 +0000 UTC m=+872.689096130" Feb 16 15:07:30 crc kubenswrapper[4748]: I0216 15:07:30.649971 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-lbl2g" Feb 16 15:07:31 crc kubenswrapper[4748]: I0216 15:07:31.036999 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:31 crc kubenswrapper[4748]: I0216 15:07:31.037139 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:31 crc kubenswrapper[4748]: I0216 15:07:31.042854 4748 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:32 crc kubenswrapper[4748]: I0216 15:07:32.006986 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-757c9676d8-lwl7z" Feb 16 15:07:32 crc kubenswrapper[4748]: I0216 15:07:32.084497 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4xpkc"] Feb 16 15:07:40 crc kubenswrapper[4748]: I0216 15:07:40.599664 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-866bcb46dc-cll59" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.145100 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4xpkc" podUID="1b6ee71e-062d-49b9-b693-665355764e4f" containerName="console" containerID="cri-o://21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0" gracePeriod=15 Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.263275 4748 patch_prober.go:28] interesting pod/console-f9d7485db-4xpkc container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.263373 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-4xpkc" podUID="1b6ee71e-062d-49b9-b693-665355764e4f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.764376 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xpkc_1b6ee71e-062d-49b9-b693-665355764e4f/console/0.log" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.764964 
4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xpkc" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.827241 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-service-ca\") pod \"1b6ee71e-062d-49b9-b693-665355764e4f\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.827769 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-oauth-serving-cert\") pod \"1b6ee71e-062d-49b9-b693-665355764e4f\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.827819 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-serving-cert\") pod \"1b6ee71e-062d-49b9-b693-665355764e4f\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.827846 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-oauth-config\") pod \"1b6ee71e-062d-49b9-b693-665355764e4f\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.827889 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjv2w\" (UniqueName: \"kubernetes.io/projected/1b6ee71e-062d-49b9-b693-665355764e4f-kube-api-access-mjv2w\") pod \"1b6ee71e-062d-49b9-b693-665355764e4f\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.828045 4748 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-trusted-ca-bundle\") pod \"1b6ee71e-062d-49b9-b693-665355764e4f\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.828081 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-console-config\") pod \"1b6ee71e-062d-49b9-b693-665355764e4f\" (UID: \"1b6ee71e-062d-49b9-b693-665355764e4f\") " Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.828236 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-service-ca" (OuterVolumeSpecName: "service-ca") pod "1b6ee71e-062d-49b9-b693-665355764e4f" (UID: "1b6ee71e-062d-49b9-b693-665355764e4f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.829273 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "1b6ee71e-062d-49b9-b693-665355764e4f" (UID: "1b6ee71e-062d-49b9-b693-665355764e4f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.829284 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1b6ee71e-062d-49b9-b693-665355764e4f" (UID: "1b6ee71e-062d-49b9-b693-665355764e4f"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.829399 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-console-config" (OuterVolumeSpecName: "console-config") pod "1b6ee71e-062d-49b9-b693-665355764e4f" (UID: "1b6ee71e-062d-49b9-b693-665355764e4f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.829725 4748 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.829746 4748 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.829756 4748 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.829767 4748 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b6ee71e-062d-49b9-b693-665355764e4f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.836491 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "1b6ee71e-062d-49b9-b693-665355764e4f" (UID: "1b6ee71e-062d-49b9-b693-665355764e4f"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.837454 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "1b6ee71e-062d-49b9-b693-665355764e4f" (UID: "1b6ee71e-062d-49b9-b693-665355764e4f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.839264 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b6ee71e-062d-49b9-b693-665355764e4f-kube-api-access-mjv2w" (OuterVolumeSpecName: "kube-api-access-mjv2w") pod "1b6ee71e-062d-49b9-b693-665355764e4f" (UID: "1b6ee71e-062d-49b9-b693-665355764e4f"). InnerVolumeSpecName "kube-api-access-mjv2w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.930795 4748 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.931137 4748 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1b6ee71e-062d-49b9-b693-665355764e4f-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:07:57 crc kubenswrapper[4748]: I0216 15:07:57.931217 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjv2w\" (UniqueName: \"kubernetes.io/projected/1b6ee71e-062d-49b9-b693-665355764e4f-kube-api-access-mjv2w\") on node \"crc\" DevicePath \"\""
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.228673 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4xpkc_1b6ee71e-062d-49b9-b693-665355764e4f/console/0.log"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.230150 4748 generic.go:334] "Generic (PLEG): container finished" podID="1b6ee71e-062d-49b9-b693-665355764e4f" containerID="21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0" exitCode=2
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.230234 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xpkc" event={"ID":"1b6ee71e-062d-49b9-b693-665355764e4f","Type":"ContainerDied","Data":"21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0"}
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.230485 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4xpkc" event={"ID":"1b6ee71e-062d-49b9-b693-665355764e4f","Type":"ContainerDied","Data":"f2a560d2178312939e52ecdbc4ab8370450b9f7a34562f5cc33ccd7997b27629"}
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.230266 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4xpkc"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.230530 4748 scope.go:117] "RemoveContainer" containerID="21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.242098 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"]
Feb 16 15:07:58 crc kubenswrapper[4748]: E0216 15:07:58.242475 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b6ee71e-062d-49b9-b693-665355764e4f" containerName="console"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.242497 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b6ee71e-062d-49b9-b693-665355764e4f" containerName="console"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.242626 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b6ee71e-062d-49b9-b693-665355764e4f" containerName="console"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.243640 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.247497 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.259942 4748 scope.go:117] "RemoveContainer" containerID="21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0"
Feb 16 15:07:58 crc kubenswrapper[4748]: E0216 15:07:58.260550 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0\": container with ID starting with 21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0 not found: ID does not exist" containerID="21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.260660 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0"} err="failed to get container status \"21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0\": rpc error: code = NotFound desc = could not find container \"21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0\": container with ID starting with 21ac81565222e0d3454789aa0fe2d21a83d4453a4a8a54bec76aede1d5de72a0 not found: ID does not exist"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.283886 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4xpkc"]
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.290966 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4xpkc"]
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.297626 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"]
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.337391 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.337460 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gn65\" (UniqueName: \"kubernetes.io/projected/d15f1018-7687-4413-a41e-cca3126fa988-kube-api-access-7gn65\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.337780 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.439802 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gn65\" (UniqueName: \"kubernetes.io/projected/d15f1018-7687-4413-a41e-cca3126fa988-kube-api-access-7gn65\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.440031 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.440099 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.440710 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-bundle\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.440980 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-util\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.459784 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gn65\" (UniqueName: \"kubernetes.io/projected/d15f1018-7687-4413-a41e-cca3126fa988-kube-api-access-7gn65\") pod \"a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") " pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.568260 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:07:58 crc kubenswrapper[4748]: I0216 15:07:58.838903 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"]
Feb 16 15:07:59 crc kubenswrapper[4748]: I0216 15:07:59.004328 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b6ee71e-062d-49b9-b693-665355764e4f" path="/var/lib/kubelet/pods/1b6ee71e-062d-49b9-b693-665355764e4f/volumes"
Feb 16 15:07:59 crc kubenswrapper[4748]: I0216 15:07:59.238680 4748 generic.go:334] "Generic (PLEG): container finished" podID="d15f1018-7687-4413-a41e-cca3126fa988" containerID="fe82a3f16a72f11624255e235e7c95676b8218a18fe762888e6b02210b96c340" exitCode=0
Feb 16 15:07:59 crc kubenswrapper[4748]: I0216 15:07:59.238784 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt" event={"ID":"d15f1018-7687-4413-a41e-cca3126fa988","Type":"ContainerDied","Data":"fe82a3f16a72f11624255e235e7c95676b8218a18fe762888e6b02210b96c340"}
Feb 16 15:07:59 crc kubenswrapper[4748]: I0216 15:07:59.238815 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt" event={"ID":"d15f1018-7687-4413-a41e-cca3126fa988","Type":"ContainerStarted","Data":"075e17914fde70839b8504f4b814820924a254ad04138a478a47ed9115b57287"}
Feb 16 15:08:01 crc kubenswrapper[4748]: I0216 15:08:01.262969 4748 generic.go:334] "Generic (PLEG): container finished" podID="d15f1018-7687-4413-a41e-cca3126fa988" containerID="22d5bf77830cb854f9274f87e61b54b70a4908dd0bebf05ec97092f0fdfe8c48" exitCode=0
Feb 16 15:08:01 crc kubenswrapper[4748]: I0216 15:08:01.263243 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt" event={"ID":"d15f1018-7687-4413-a41e-cca3126fa988","Type":"ContainerDied","Data":"22d5bf77830cb854f9274f87e61b54b70a4908dd0bebf05ec97092f0fdfe8c48"}
Feb 16 15:08:02 crc kubenswrapper[4748]: I0216 15:08:02.272010 4748 generic.go:334] "Generic (PLEG): container finished" podID="d15f1018-7687-4413-a41e-cca3126fa988" containerID="c63702df8929744e9178c3fd0d91829244d7ea028154637dd31903e31a6cdf88" exitCode=0
Feb 16 15:08:02 crc kubenswrapper[4748]: I0216 15:08:02.272069 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt" event={"ID":"d15f1018-7687-4413-a41e-cca3126fa988","Type":"ContainerDied","Data":"c63702df8929744e9178c3fd0d91829244d7ea028154637dd31903e31a6cdf88"}
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.555273 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.631533 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gn65\" (UniqueName: \"kubernetes.io/projected/d15f1018-7687-4413-a41e-cca3126fa988-kube-api-access-7gn65\") pod \"d15f1018-7687-4413-a41e-cca3126fa988\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") "
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.631594 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-bundle\") pod \"d15f1018-7687-4413-a41e-cca3126fa988\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") "
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.631631 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-util\") pod \"d15f1018-7687-4413-a41e-cca3126fa988\" (UID: \"d15f1018-7687-4413-a41e-cca3126fa988\") "
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.633987 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-bundle" (OuterVolumeSpecName: "bundle") pod "d15f1018-7687-4413-a41e-cca3126fa988" (UID: "d15f1018-7687-4413-a41e-cca3126fa988"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.642186 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d15f1018-7687-4413-a41e-cca3126fa988-kube-api-access-7gn65" (OuterVolumeSpecName: "kube-api-access-7gn65") pod "d15f1018-7687-4413-a41e-cca3126fa988" (UID: "d15f1018-7687-4413-a41e-cca3126fa988"). InnerVolumeSpecName "kube-api-access-7gn65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.646036 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-util" (OuterVolumeSpecName: "util") pod "d15f1018-7687-4413-a41e-cca3126fa988" (UID: "d15f1018-7687-4413-a41e-cca3126fa988"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.733343 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gn65\" (UniqueName: \"kubernetes.io/projected/d15f1018-7687-4413-a41e-cca3126fa988-kube-api-access-7gn65\") on node \"crc\" DevicePath \"\""
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.733413 4748 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:08:03 crc kubenswrapper[4748]: I0216 15:08:03.733430 4748 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d15f1018-7687-4413-a41e-cca3126fa988-util\") on node \"crc\" DevicePath \"\""
Feb 16 15:08:04 crc kubenswrapper[4748]: I0216 15:08:04.288360 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt" event={"ID":"d15f1018-7687-4413-a41e-cca3126fa988","Type":"ContainerDied","Data":"075e17914fde70839b8504f4b814820924a254ad04138a478a47ed9115b57287"}
Feb 16 15:08:04 crc kubenswrapper[4748]: I0216 15:08:04.288399 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="075e17914fde70839b8504f4b814820924a254ad04138a478a47ed9115b57287"
Feb 16 15:08:04 crc kubenswrapper[4748]: I0216 15:08:04.288473 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.074540 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"]
Feb 16 15:08:16 crc kubenswrapper[4748]: E0216 15:08:16.075378 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15f1018-7687-4413-a41e-cca3126fa988" containerName="pull"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.075392 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15f1018-7687-4413-a41e-cca3126fa988" containerName="pull"
Feb 16 15:08:16 crc kubenswrapper[4748]: E0216 15:08:16.075417 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15f1018-7687-4413-a41e-cca3126fa988" containerName="util"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.075423 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15f1018-7687-4413-a41e-cca3126fa988" containerName="util"
Feb 16 15:08:16 crc kubenswrapper[4748]: E0216 15:08:16.075433 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d15f1018-7687-4413-a41e-cca3126fa988" containerName="extract"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.075438 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d15f1018-7687-4413-a41e-cca3126fa988" containerName="extract"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.075550 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d15f1018-7687-4413-a41e-cca3126fa988" containerName="extract"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.076128 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.079832 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qsm4r"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.079859 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.079890 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.079914 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.083130 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.102831 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"]
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.220701 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1e7c11c-e2d3-4941-b9ce-15b587b46798-webhook-cert\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.221112 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dptr2\" (UniqueName: \"kubernetes.io/projected/a1e7c11c-e2d3-4941-b9ce-15b587b46798-kube-api-access-dptr2\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.221252 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1e7c11c-e2d3-4941-b9ce-15b587b46798-apiservice-cert\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.322660 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1e7c11c-e2d3-4941-b9ce-15b587b46798-apiservice-cert\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.323011 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1e7c11c-e2d3-4941-b9ce-15b587b46798-webhook-cert\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.323149 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dptr2\" (UniqueName: \"kubernetes.io/projected/a1e7c11c-e2d3-4941-b9ce-15b587b46798-kube-api-access-dptr2\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.331520 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a1e7c11c-e2d3-4941-b9ce-15b587b46798-webhook-cert\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.341980 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a1e7c11c-e2d3-4941-b9ce-15b587b46798-apiservice-cert\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.353083 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dptr2\" (UniqueName: \"kubernetes.io/projected/a1e7c11c-e2d3-4941-b9ce-15b587b46798-kube-api-access-dptr2\") pod \"metallb-operator-controller-manager-5685848bcc-n64g2\" (UID: \"a1e7c11c-e2d3-4941-b9ce-15b587b46798\") " pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.377514 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"]
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.378550 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.385521 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.385999 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.386138 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-wjptc"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.392386 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.400385 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"]
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.528510 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3037912-4b1a-4bca-978a-eb9e28269c5e-apiservice-cert\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.529759 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3037912-4b1a-4bca-978a-eb9e28269c5e-webhook-cert\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.529786 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn4n9\" (UniqueName: \"kubernetes.io/projected/b3037912-4b1a-4bca-978a-eb9e28269c5e-kube-api-access-nn4n9\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.631152 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3037912-4b1a-4bca-978a-eb9e28269c5e-apiservice-cert\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.631208 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3037912-4b1a-4bca-978a-eb9e28269c5e-webhook-cert\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.631237 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn4n9\" (UniqueName: \"kubernetes.io/projected/b3037912-4b1a-4bca-978a-eb9e28269c5e-kube-api-access-nn4n9\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.651658 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b3037912-4b1a-4bca-978a-eb9e28269c5e-apiservice-cert\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.653862 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn4n9\" (UniqueName: \"kubernetes.io/projected/b3037912-4b1a-4bca-978a-eb9e28269c5e-kube-api-access-nn4n9\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.654539 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b3037912-4b1a-4bca-978a-eb9e28269c5e-webhook-cert\") pod \"metallb-operator-webhook-server-b4f87669f-wmdxb\" (UID: \"b3037912-4b1a-4bca-978a-eb9e28269c5e\") " pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.699353 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.858355 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"]
Feb 16 15:08:16 crc kubenswrapper[4748]: W0216 15:08:16.868671 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1e7c11c_e2d3_4941_b9ce_15b587b46798.slice/crio-e879b574444580080120d9cbbae7861b71d31a065e7037151380dfac535b2122 WatchSource:0}: Error finding container e879b574444580080120d9cbbae7861b71d31a065e7037151380dfac535b2122: Status 404 returned error can't find the container with id e879b574444580080120d9cbbae7861b71d31a065e7037151380dfac535b2122
Feb 16 15:08:16 crc kubenswrapper[4748]: I0216 15:08:16.918272 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"]
Feb 16 15:08:17 crc kubenswrapper[4748]: I0216 15:08:17.454112 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2" event={"ID":"a1e7c11c-e2d3-4941-b9ce-15b587b46798","Type":"ContainerStarted","Data":"e879b574444580080120d9cbbae7861b71d31a065e7037151380dfac535b2122"}
Feb 16 15:08:17 crc kubenswrapper[4748]: I0216 15:08:17.455378 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb" event={"ID":"b3037912-4b1a-4bca-978a-eb9e28269c5e","Type":"ContainerStarted","Data":"26d4708b5b18dce8603f55ae8d40cadc7f67e13fa1d11efa0631917d9336b795"}
Feb 16 15:08:22 crc kubenswrapper[4748]: I0216 15:08:22.509413 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb" event={"ID":"b3037912-4b1a-4bca-978a-eb9e28269c5e","Type":"ContainerStarted","Data":"099d62b9956b141aabd8117861c06c9dc100b87e2f8b3d242d39e83c33a00af0"}
Feb 16 15:08:22 crc kubenswrapper[4748]: I0216 15:08:22.510690 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:22 crc kubenswrapper[4748]: I0216 15:08:22.513463 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2" event={"ID":"a1e7c11c-e2d3-4941-b9ce-15b587b46798","Type":"ContainerStarted","Data":"91bb18bda3eb0acec58446f6037b85388fd963561b728c9b356305394f96deb8"}
Feb 16 15:08:22 crc kubenswrapper[4748]: I0216 15:08:22.513987 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:22 crc kubenswrapper[4748]: I0216 15:08:22.548118 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb" podStartSLOduration=1.984679351 podStartE2EDuration="6.548087943s" podCreationTimestamp="2026-02-16 15:08:16 +0000 UTC" firstStartedPulling="2026-02-16 15:08:16.944526654 +0000 UTC m=+922.636195693" lastFinishedPulling="2026-02-16 15:08:21.507935246 +0000 UTC m=+927.199604285" observedRunningTime="2026-02-16 15:08:22.541194344 +0000 UTC m=+928.232863383" watchObservedRunningTime="2026-02-16 15:08:22.548087943 +0000 UTC m=+928.239756992"
Feb 16 15:08:22 crc kubenswrapper[4748]: I0216 15:08:22.570191 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2" podStartSLOduration=2.018250943 podStartE2EDuration="6.570169424s" podCreationTimestamp="2026-02-16 15:08:16 +0000 UTC" firstStartedPulling="2026-02-16 15:08:16.871568025 +0000 UTC m=+922.563237064" lastFinishedPulling="2026-02-16 15:08:21.423486506 +0000 UTC m=+927.115155545" observedRunningTime="2026-02-16 15:08:22.564640508 +0000 UTC m=+928.256309557" watchObservedRunningTime="2026-02-16 15:08:22.570169424 +0000 UTC m=+928.261838463"
Feb 16 15:08:36 crc kubenswrapper[4748]: I0216 15:08:36.704875 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-b4f87669f-wmdxb"
Feb 16 15:08:56 crc kubenswrapper[4748]: I0216 15:08:56.396763 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5685848bcc-n64g2"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.249868 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq"]
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.252444 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.261735 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.270248 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-hgfxv"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.303161 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-t6n42"]
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.320836 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t6n42"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.327434 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq"]
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.328905 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.336406 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.381197 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g7mpk"]
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.382602 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g7mpk"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.391220 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.391704 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.391982 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-942s7"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.392903 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-69bbfbf88f-djpzh"]
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.393052 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.394419 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-69bbfbf88f-djpzh"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.396644 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.402770 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-djpzh"]
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.416675 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgf5\" (UniqueName: \"kubernetes.io/projected/d63d4158-b599-4058-a9c5-e31d1125c0bc-kube-api-access-8bgf5\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.417017 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.417123 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-metrics-certs\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk"
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.417295 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/645081e9-4d4b-4ec4-aca0-0d65484e18fc-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-vb4cq\" (UID: \"645081e9-4d4b-4ec4-aca0-0d65484e18fc\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq"
Feb 16 15:08:57 crc
kubenswrapper[4748]: I0216 15:08:57.417381 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d63d4158-b599-4058-a9c5-e31d1125c0bc-metallb-excludel2\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.417461 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6qmz\" (UniqueName: \"kubernetes.io/projected/645081e9-4d4b-4ec4-aca0-0d65484e18fc-kube-api-access-d6qmz\") pod \"frr-k8s-webhook-server-78b44bf5bb-vb4cq\" (UID: \"645081e9-4d4b-4ec4-aca0-0d65484e18fc\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519394 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-startup\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519469 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bgf5\" (UniqueName: \"kubernetes.io/projected/d63d4158-b599-4058-a9c5-e31d1125c0bc-kube-api-access-8bgf5\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519542 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrz4q\" (UniqueName: \"kubernetes.io/projected/28714bd3-7a7d-449e-a2c7-281461dabdb7-kube-api-access-jrz4q\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" 
Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519578 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-metrics-certs\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519612 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-cert\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519645 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-reloader\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519668 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-metrics-certs\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519737 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519775 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvkh\" (UniqueName: \"kubernetes.io/projected/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-kube-api-access-psvkh\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519810 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-metrics-certs\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519835 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-metrics\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519858 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-sockets\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519882 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/645081e9-4d4b-4ec4-aca0-0d65484e18fc-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-vb4cq\" (UID: \"645081e9-4d4b-4ec4-aca0-0d65484e18fc\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519918 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/d63d4158-b599-4058-a9c5-e31d1125c0bc-metallb-excludel2\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519954 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6qmz\" (UniqueName: \"kubernetes.io/projected/645081e9-4d4b-4ec4-aca0-0d65484e18fc-kube-api-access-d6qmz\") pod \"frr-k8s-webhook-server-78b44bf5bb-vb4cq\" (UID: \"645081e9-4d4b-4ec4-aca0-0d65484e18fc\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.519992 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-conf\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: E0216 15:08:57.520045 4748 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 15:08:57 crc kubenswrapper[4748]: E0216 15:08:57.520109 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist podName:d63d4158-b599-4058-a9c5-e31d1125c0bc nodeName:}" failed. No retries permitted until 2026-02-16 15:08:58.020085033 +0000 UTC m=+963.711754072 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist") pod "speaker-g7mpk" (UID: "d63d4158-b599-4058-a9c5-e31d1125c0bc") : secret "metallb-memberlist" not found Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.521232 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d63d4158-b599-4058-a9c5-e31d1125c0bc-metallb-excludel2\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.526334 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-metrics-certs\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.538929 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bgf5\" (UniqueName: \"kubernetes.io/projected/d63d4158-b599-4058-a9c5-e31d1125c0bc-kube-api-access-8bgf5\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.540254 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/645081e9-4d4b-4ec4-aca0-0d65484e18fc-cert\") pod \"frr-k8s-webhook-server-78b44bf5bb-vb4cq\" (UID: \"645081e9-4d4b-4ec4-aca0-0d65484e18fc\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.547588 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6qmz\" (UniqueName: \"kubernetes.io/projected/645081e9-4d4b-4ec4-aca0-0d65484e18fc-kube-api-access-d6qmz\") pod 
\"frr-k8s-webhook-server-78b44bf5bb-vb4cq\" (UID: \"645081e9-4d4b-4ec4-aca0-0d65484e18fc\") " pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.609950 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.620967 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-conf\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.621490 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-conf\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.621571 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-startup\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.621665 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrz4q\" (UniqueName: \"kubernetes.io/projected/28714bd3-7a7d-449e-a2c7-281461dabdb7-kube-api-access-jrz4q\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:57 crc kubenswrapper[4748]: E0216 15:08:57.622110 4748 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not 
found Feb 16 15:08:57 crc kubenswrapper[4748]: E0216 15:08:57.622252 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-metrics-certs podName:28714bd3-7a7d-449e-a2c7-281461dabdb7 nodeName:}" failed. No retries permitted until 2026-02-16 15:08:58.122226867 +0000 UTC m=+963.813895906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-metrics-certs") pod "controller-69bbfbf88f-djpzh" (UID: "28714bd3-7a7d-449e-a2c7-281461dabdb7") : secret "controller-certs-secret" not found Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.622410 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-startup\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.622470 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-metrics-certs\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.622516 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-cert\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.622545 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-metrics-certs\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.622990 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-reloader\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.623233 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-reloader\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.623784 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvkh\" (UniqueName: \"kubernetes.io/projected/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-kube-api-access-psvkh\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.623838 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-metrics\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.624080 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-sockets\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc 
kubenswrapper[4748]: I0216 15:08:57.624221 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-metrics\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.624322 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-frr-sockets\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.624756 4748 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.629366 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-metrics-certs\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.636643 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-cert\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.640523 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvkh\" (UniqueName: \"kubernetes.io/projected/0248528a-4dfd-4dcd-ab5d-e99c2f989f81-kube-api-access-psvkh\") pod \"frr-k8s-t6n42\" (UID: \"0248528a-4dfd-4dcd-ab5d-e99c2f989f81\") " pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.644572 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrz4q\" (UniqueName: \"kubernetes.io/projected/28714bd3-7a7d-449e-a2c7-281461dabdb7-kube-api-access-jrz4q\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:57 crc kubenswrapper[4748]: I0216 15:08:57.675022 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-t6n42" Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.030695 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq"] Feb 16 15:08:58 crc kubenswrapper[4748]: W0216 15:08:58.034345 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod645081e9_4d4b_4ec4_aca0_0d65484e18fc.slice/crio-235d5c43a0c48cba29691bb7909d7c08c9814ae67b9c70dd6683c6c78976c165 WatchSource:0}: Error finding container 235d5c43a0c48cba29691bb7909d7c08c9814ae67b9c70dd6683c6c78976c165: Status 404 returned error can't find the container with id 235d5c43a0c48cba29691bb7909d7c08c9814ae67b9c70dd6683c6c78976c165 Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.037934 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:58 crc kubenswrapper[4748]: E0216 15:08:58.038181 4748 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 16 15:08:58 crc kubenswrapper[4748]: E0216 15:08:58.038290 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist podName:d63d4158-b599-4058-a9c5-e31d1125c0bc 
nodeName:}" failed. No retries permitted until 2026-02-16 15:08:59.038265856 +0000 UTC m=+964.729934895 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist") pod "speaker-g7mpk" (UID: "d63d4158-b599-4058-a9c5-e31d1125c0bc") : secret "metallb-memberlist" not found Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.139237 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-metrics-certs\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.147468 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28714bd3-7a7d-449e-a2c7-281461dabdb7-metrics-certs\") pod \"controller-69bbfbf88f-djpzh\" (UID: \"28714bd3-7a7d-449e-a2c7-281461dabdb7\") " pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.335090 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.601545 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-69bbfbf88f-djpzh"] Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.814462 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" event={"ID":"645081e9-4d4b-4ec4-aca0-0d65484e18fc","Type":"ContainerStarted","Data":"235d5c43a0c48cba29691bb7909d7c08c9814ae67b9c70dd6683c6c78976c165"} Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.817229 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-djpzh" event={"ID":"28714bd3-7a7d-449e-a2c7-281461dabdb7","Type":"ContainerStarted","Data":"661df4f3e838e5f02a7177c96c3ffc28d69cba9e5f39a0ff2dec03df53384782"} Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.817273 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-djpzh" event={"ID":"28714bd3-7a7d-449e-a2c7-281461dabdb7","Type":"ContainerStarted","Data":"da603e532d6f27279541b7997c7b4fb65526454e098cd05a1d17d95807e0fa65"} Feb 16 15:08:58 crc kubenswrapper[4748]: I0216 15:08:58.818740 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerStarted","Data":"78b4f9be55ec985aa4463f322808319165833f99cdfbde43ea4ff3368ee0dbcc"} Feb 16 15:08:59 crc kubenswrapper[4748]: I0216 15:08:59.055173 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:59 crc kubenswrapper[4748]: I0216 15:08:59.063202 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d63d4158-b599-4058-a9c5-e31d1125c0bc-memberlist\") pod \"speaker-g7mpk\" (UID: \"d63d4158-b599-4058-a9c5-e31d1125c0bc\") " pod="metallb-system/speaker-g7mpk" Feb 16 15:08:59 crc kubenswrapper[4748]: I0216 15:08:59.211840 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g7mpk" Feb 16 15:08:59 crc kubenswrapper[4748]: I0216 15:08:59.840170 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-69bbfbf88f-djpzh" event={"ID":"28714bd3-7a7d-449e-a2c7-281461dabdb7","Type":"ContainerStarted","Data":"4865c3dfb2b9d6a6fa28181c51ae2f91cbfe8c212e559c0fc34eca1431d8a515"} Feb 16 15:08:59 crc kubenswrapper[4748]: I0216 15:08:59.840632 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:08:59 crc kubenswrapper[4748]: I0216 15:08:59.843999 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g7mpk" event={"ID":"d63d4158-b599-4058-a9c5-e31d1125c0bc","Type":"ContainerStarted","Data":"0f8d6a5ec76d1c216b27d046f53d70c711b52864f66c518883a563d832110eff"} Feb 16 15:08:59 crc kubenswrapper[4748]: I0216 15:08:59.844059 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g7mpk" event={"ID":"d63d4158-b599-4058-a9c5-e31d1125c0bc","Type":"ContainerStarted","Data":"2f41ae7f252c6a0c41c15acbf245651d6c03407bb91786f48ee0dbf860117501"} Feb 16 15:09:00 crc kubenswrapper[4748]: I0216 15:09:00.853804 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g7mpk" event={"ID":"d63d4158-b599-4058-a9c5-e31d1125c0bc","Type":"ContainerStarted","Data":"41ff61040a99b68e9f314f61d59d436986a075fea10b7daecea4f546a9fa4c65"} Feb 16 15:09:00 crc kubenswrapper[4748]: I0216 15:09:00.854312 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g7mpk" Feb 16 15:09:00 crc 
kubenswrapper[4748]: I0216 15:09:00.877927 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g7mpk" podStartSLOduration=3.877906603 podStartE2EDuration="3.877906603s" podCreationTimestamp="2026-02-16 15:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:09:00.877191675 +0000 UTC m=+966.568860714" watchObservedRunningTime="2026-02-16 15:09:00.877906603 +0000 UTC m=+966.569575642" Feb 16 15:09:00 crc kubenswrapper[4748]: I0216 15:09:00.879209 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-69bbfbf88f-djpzh" podStartSLOduration=3.879204625 podStartE2EDuration="3.879204625s" podCreationTimestamp="2026-02-16 15:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:08:59.862691027 +0000 UTC m=+965.554360056" watchObservedRunningTime="2026-02-16 15:09:00.879204625 +0000 UTC m=+966.570873664" Feb 16 15:09:04 crc kubenswrapper[4748]: I0216 15:09:04.729467 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:09:04 crc kubenswrapper[4748]: I0216 15:09:04.730079 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:09:05 crc kubenswrapper[4748]: I0216 15:09:05.906087 4748 generic.go:334] "Generic (PLEG): container finished" 
podID="0248528a-4dfd-4dcd-ab5d-e99c2f989f81" containerID="536b2ab37bb619299cd35d6edc0f82eec8f506e0d9ab39ae29a00b828a18483a" exitCode=0 Feb 16 15:09:05 crc kubenswrapper[4748]: I0216 15:09:05.906183 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerDied","Data":"536b2ab37bb619299cd35d6edc0f82eec8f506e0d9ab39ae29a00b828a18483a"} Feb 16 15:09:05 crc kubenswrapper[4748]: I0216 15:09:05.908597 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" event={"ID":"645081e9-4d4b-4ec4-aca0-0d65484e18fc","Type":"ContainerStarted","Data":"27136246a3e93529aa46779086485da2609e9d4eb0fd8566e82a2ef8eda00f30"} Feb 16 15:09:05 crc kubenswrapper[4748]: I0216 15:09:05.909090 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:09:05 crc kubenswrapper[4748]: I0216 15:09:05.972081 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" podStartSLOduration=1.431221048 podStartE2EDuration="8.972058215s" podCreationTimestamp="2026-02-16 15:08:57 +0000 UTC" firstStartedPulling="2026-02-16 15:08:58.038542212 +0000 UTC m=+963.730211251" lastFinishedPulling="2026-02-16 15:09:05.579379379 +0000 UTC m=+971.271048418" observedRunningTime="2026-02-16 15:09:05.966509838 +0000 UTC m=+971.658178867" watchObservedRunningTime="2026-02-16 15:09:05.972058215 +0000 UTC m=+971.663727254" Feb 16 15:09:06 crc kubenswrapper[4748]: I0216 15:09:06.919742 4748 generic.go:334] "Generic (PLEG): container finished" podID="0248528a-4dfd-4dcd-ab5d-e99c2f989f81" containerID="ae71dd2d0244bac3409f65d0d09ca7c900565bf78b567f4745332c13daa62e70" exitCode=0 Feb 16 15:09:06 crc kubenswrapper[4748]: I0216 15:09:06.919838 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerDied","Data":"ae71dd2d0244bac3409f65d0d09ca7c900565bf78b567f4745332c13daa62e70"} Feb 16 15:09:07 crc kubenswrapper[4748]: I0216 15:09:07.929175 4748 generic.go:334] "Generic (PLEG): container finished" podID="0248528a-4dfd-4dcd-ab5d-e99c2f989f81" containerID="0774a907782e3492c46c1aaf42d8c261662fbd452ed06f00416dc725f88f0a85" exitCode=0 Feb 16 15:09:07 crc kubenswrapper[4748]: I0216 15:09:07.929276 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerDied","Data":"0774a907782e3492c46c1aaf42d8c261662fbd452ed06f00416dc725f88f0a85"} Feb 16 15:09:08 crc kubenswrapper[4748]: I0216 15:09:08.342274 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-69bbfbf88f-djpzh" Feb 16 15:09:08 crc kubenswrapper[4748]: I0216 15:09:08.942897 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerStarted","Data":"3b88b0a48356b51e15254b16540c1dd0c380df055ad66e6c2cdd3da78da3e523"} Feb 16 15:09:08 crc kubenswrapper[4748]: I0216 15:09:08.943364 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerStarted","Data":"33c3e30f6d654969b016442b03c15b9016626f610591a87dd1bbc2cd96113521"} Feb 16 15:09:08 crc kubenswrapper[4748]: I0216 15:09:08.943381 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerStarted","Data":"1aba12deffd95e55541e05a4f1798e22b4489de55bc5d2bf99117833fa681301"} Feb 16 15:09:08 crc kubenswrapper[4748]: I0216 15:09:08.943394 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" 
event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerStarted","Data":"676382baabaaead26e61c6b0ffa9c2d756f8b5ff0207c6ef70f2e24fcc7692cf"} Feb 16 15:09:08 crc kubenswrapper[4748]: I0216 15:09:08.943406 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerStarted","Data":"89eeb75decde3d086f6dda068853189fe64240b4eaea781366b9ff7034f41f22"} Feb 16 15:09:09 crc kubenswrapper[4748]: I0216 15:09:09.216048 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g7mpk" Feb 16 15:09:09 crc kubenswrapper[4748]: I0216 15:09:09.961100 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-t6n42" event={"ID":"0248528a-4dfd-4dcd-ab5d-e99c2f989f81","Type":"ContainerStarted","Data":"867b92ef586118df15b7ad01e6299ef7edff4e85ce06d4dcaf2bb3cc7dc33ed6"} Feb 16 15:09:09 crc kubenswrapper[4748]: I0216 15:09:09.961566 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-t6n42" Feb 16 15:09:09 crc kubenswrapper[4748]: I0216 15:09:09.998614 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-t6n42" podStartSLOduration=5.267309665 podStartE2EDuration="12.998580079s" podCreationTimestamp="2026-02-16 15:08:57 +0000 UTC" firstStartedPulling="2026-02-16 15:08:57.826962606 +0000 UTC m=+963.518631645" lastFinishedPulling="2026-02-16 15:09:05.55823302 +0000 UTC m=+971.249902059" observedRunningTime="2026-02-16 15:09:09.991488955 +0000 UTC m=+975.683157994" watchObservedRunningTime="2026-02-16 15:09:09.998580079 +0000 UTC m=+975.690249118" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.240531 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-t6cqq"] Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.242236 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-t6cqq" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.248080 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-qnh56" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.248228 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.248771 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.266353 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t6cqq"] Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.315315 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gq4lk\" (UniqueName: \"kubernetes.io/projected/16d51e2d-af1b-4752-a9b0-8bbb8f869c16-kube-api-access-gq4lk\") pod \"openstack-operator-index-t6cqq\" (UID: \"16d51e2d-af1b-4752-a9b0-8bbb8f869c16\") " pod="openstack-operators/openstack-operator-index-t6cqq" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.416360 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gq4lk\" (UniqueName: \"kubernetes.io/projected/16d51e2d-af1b-4752-a9b0-8bbb8f869c16-kube-api-access-gq4lk\") pod \"openstack-operator-index-t6cqq\" (UID: \"16d51e2d-af1b-4752-a9b0-8bbb8f869c16\") " pod="openstack-operators/openstack-operator-index-t6cqq" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.441699 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gq4lk\" (UniqueName: \"kubernetes.io/projected/16d51e2d-af1b-4752-a9b0-8bbb8f869c16-kube-api-access-gq4lk\") pod \"openstack-operator-index-t6cqq\" (UID: 
\"16d51e2d-af1b-4752-a9b0-8bbb8f869c16\") " pod="openstack-operators/openstack-operator-index-t6cqq" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.561469 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t6cqq" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.677056 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-t6n42" Feb 16 15:09:12 crc kubenswrapper[4748]: I0216 15:09:12.724947 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-t6n42" Feb 16 15:09:13 crc kubenswrapper[4748]: I0216 15:09:13.039126 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-t6cqq"] Feb 16 15:09:13 crc kubenswrapper[4748]: W0216 15:09:13.053934 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16d51e2d_af1b_4752_a9b0_8bbb8f869c16.slice/crio-9602617902f5c24ac16484cdedead9fa6cf7c667784c4772cdcd9fad7bf156d1 WatchSource:0}: Error finding container 9602617902f5c24ac16484cdedead9fa6cf7c667784c4772cdcd9fad7bf156d1: Status 404 returned error can't find the container with id 9602617902f5c24ac16484cdedead9fa6cf7c667784c4772cdcd9fad7bf156d1 Feb 16 15:09:13 crc kubenswrapper[4748]: I0216 15:09:13.999462 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t6cqq" event={"ID":"16d51e2d-af1b-4752-a9b0-8bbb8f869c16","Type":"ContainerStarted","Data":"9602617902f5c24ac16484cdedead9fa6cf7c667784c4772cdcd9fad7bf156d1"} Feb 16 15:09:15 crc kubenswrapper[4748]: I0216 15:09:15.219252 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-t6cqq"] Feb 16 15:09:15 crc kubenswrapper[4748]: I0216 15:09:15.832631 4748 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/openstack-operator-index-msk6t"] Feb 16 15:09:15 crc kubenswrapper[4748]: I0216 15:09:15.835035 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:15 crc kubenswrapper[4748]: I0216 15:09:15.848063 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-msk6t"] Feb 16 15:09:15 crc kubenswrapper[4748]: I0216 15:09:15.975680 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm7bn\" (UniqueName: \"kubernetes.io/projected/391e6019-7bc8-4e9d-bfff-c5be8b646c53-kube-api-access-bm7bn\") pod \"openstack-operator-index-msk6t\" (UID: \"391e6019-7bc8-4e9d-bfff-c5be8b646c53\") " pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.032698 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t6cqq" event={"ID":"16d51e2d-af1b-4752-a9b0-8bbb8f869c16","Type":"ContainerStarted","Data":"e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab"} Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.032980 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-t6cqq" podUID="16d51e2d-af1b-4752-a9b0-8bbb8f869c16" containerName="registry-server" containerID="cri-o://e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab" gracePeriod=2 Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.067925 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-t6cqq" podStartSLOduration=1.699594824 podStartE2EDuration="4.067887555s" podCreationTimestamp="2026-02-16 15:09:12 +0000 UTC" firstStartedPulling="2026-02-16 15:09:13.056581261 +0000 UTC m=+978.748250300" lastFinishedPulling="2026-02-16 
15:09:15.424874002 +0000 UTC m=+981.116543031" observedRunningTime="2026-02-16 15:09:16.060625176 +0000 UTC m=+981.752294245" watchObservedRunningTime="2026-02-16 15:09:16.067887555 +0000 UTC m=+981.759556604" Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.077959 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm7bn\" (UniqueName: \"kubernetes.io/projected/391e6019-7bc8-4e9d-bfff-c5be8b646c53-kube-api-access-bm7bn\") pod \"openstack-operator-index-msk6t\" (UID: \"391e6019-7bc8-4e9d-bfff-c5be8b646c53\") " pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.115118 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm7bn\" (UniqueName: \"kubernetes.io/projected/391e6019-7bc8-4e9d-bfff-c5be8b646c53-kube-api-access-bm7bn\") pod \"openstack-operator-index-msk6t\" (UID: \"391e6019-7bc8-4e9d-bfff-c5be8b646c53\") " pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.169554 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.448859 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-msk6t"] Feb 16 15:09:16 crc kubenswrapper[4748]: W0216 15:09:16.450397 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod391e6019_7bc8_4e9d_bfff_c5be8b646c53.slice/crio-073904ec7cadc183470a01a56cdb1efb27418a44f5dbb02ce70ee1f55b20932c WatchSource:0}: Error finding container 073904ec7cadc183470a01a56cdb1efb27418a44f5dbb02ce70ee1f55b20932c: Status 404 returned error can't find the container with id 073904ec7cadc183470a01a56cdb1efb27418a44f5dbb02ce70ee1f55b20932c Feb 16 15:09:16 crc kubenswrapper[4748]: I0216 15:09:16.994285 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t6cqq" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.043529 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msk6t" event={"ID":"391e6019-7bc8-4e9d-bfff-c5be8b646c53","Type":"ContainerStarted","Data":"219cb76cd709c8234de2e8fad1441dfcdbbb96bcd3a5926e3b251adb9a63aac0"} Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.043593 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-msk6t" event={"ID":"391e6019-7bc8-4e9d-bfff-c5be8b646c53","Type":"ContainerStarted","Data":"073904ec7cadc183470a01a56cdb1efb27418a44f5dbb02ce70ee1f55b20932c"} Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.047514 4748 generic.go:334] "Generic (PLEG): container finished" podID="16d51e2d-af1b-4752-a9b0-8bbb8f869c16" containerID="e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab" exitCode=0 Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.047588 4748 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t6cqq" event={"ID":"16d51e2d-af1b-4752-a9b0-8bbb8f869c16","Type":"ContainerDied","Data":"e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab"} Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.047631 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-t6cqq" event={"ID":"16d51e2d-af1b-4752-a9b0-8bbb8f869c16","Type":"ContainerDied","Data":"9602617902f5c24ac16484cdedead9fa6cf7c667784c4772cdcd9fad7bf156d1"} Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.047660 4748 scope.go:117] "RemoveContainer" containerID="e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.047878 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-t6cqq" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.070459 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-msk6t" podStartSLOduration=1.852455776 podStartE2EDuration="2.07043111s" podCreationTimestamp="2026-02-16 15:09:15 +0000 UTC" firstStartedPulling="2026-02-16 15:09:16.458272843 +0000 UTC m=+982.149941902" lastFinishedPulling="2026-02-16 15:09:16.676248197 +0000 UTC m=+982.367917236" observedRunningTime="2026-02-16 15:09:17.068551193 +0000 UTC m=+982.760220232" watchObservedRunningTime="2026-02-16 15:09:17.07043111 +0000 UTC m=+982.762100149" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.072050 4748 scope.go:117] "RemoveContainer" containerID="e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab" Feb 16 15:09:17 crc kubenswrapper[4748]: E0216 15:09:17.075406 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab\": container with ID starting with e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab not found: ID does not exist" containerID="e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.075505 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab"} err="failed to get container status \"e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab\": rpc error: code = NotFound desc = could not find container \"e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab\": container with ID starting with e10eb2bdbc10f2ecc6e690eaaad7cb8df3334138c7345c140c2cf5fd84c657ab not found: ID does not exist" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.094492 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gq4lk\" (UniqueName: \"kubernetes.io/projected/16d51e2d-af1b-4752-a9b0-8bbb8f869c16-kube-api-access-gq4lk\") pod \"16d51e2d-af1b-4752-a9b0-8bbb8f869c16\" (UID: \"16d51e2d-af1b-4752-a9b0-8bbb8f869c16\") " Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.105553 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16d51e2d-af1b-4752-a9b0-8bbb8f869c16-kube-api-access-gq4lk" (OuterVolumeSpecName: "kube-api-access-gq4lk") pod "16d51e2d-af1b-4752-a9b0-8bbb8f869c16" (UID: "16d51e2d-af1b-4752-a9b0-8bbb8f869c16"). InnerVolumeSpecName "kube-api-access-gq4lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.196915 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gq4lk\" (UniqueName: \"kubernetes.io/projected/16d51e2d-af1b-4752-a9b0-8bbb8f869c16-kube-api-access-gq4lk\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.405199 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-t6cqq"] Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.415783 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-t6cqq"] Feb 16 15:09:17 crc kubenswrapper[4748]: I0216 15:09:17.622054 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-78b44bf5bb-vb4cq" Feb 16 15:09:19 crc kubenswrapper[4748]: I0216 15:09:19.010460 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16d51e2d-af1b-4752-a9b0-8bbb8f869c16" path="/var/lib/kubelet/pods/16d51e2d-af1b-4752-a9b0-8bbb8f869c16/volumes" Feb 16 15:09:26 crc kubenswrapper[4748]: I0216 15:09:26.170686 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:26 crc kubenswrapper[4748]: I0216 15:09:26.171550 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:26 crc kubenswrapper[4748]: I0216 15:09:26.217926 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:27 crc kubenswrapper[4748]: I0216 15:09:27.191096 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-msk6t" Feb 16 15:09:27 crc kubenswrapper[4748]: I0216 15:09:27.678606 4748 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-t6n42" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.089900 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9"] Feb 16 15:09:28 crc kubenswrapper[4748]: E0216 15:09:28.090735 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16d51e2d-af1b-4752-a9b0-8bbb8f869c16" containerName="registry-server" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.090766 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="16d51e2d-af1b-4752-a9b0-8bbb8f869c16" containerName="registry-server" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.091042 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="16d51e2d-af1b-4752-a9b0-8bbb8f869c16" containerName="registry-server" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.092803 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.097827 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-zljld" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.102497 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9"] Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.284502 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqh9p\" (UniqueName: \"kubernetes.io/projected/2927c174-6e02-4529-802c-3bf02d82855f-kube-api-access-lqh9p\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.284580 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.284969 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 
15:09:28.387293 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqh9p\" (UniqueName: \"kubernetes.io/projected/2927c174-6e02-4529-802c-3bf02d82855f-kube-api-access-lqh9p\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.387390 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.387499 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.388470 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-util\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.388548 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-bundle\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.411190 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqh9p\" (UniqueName: \"kubernetes.io/projected/2927c174-6e02-4529-802c-3bf02d82855f-kube-api-access-lqh9p\") pod \"02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.423606 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:28 crc kubenswrapper[4748]: I0216 15:09:28.892471 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9"] Feb 16 15:09:28 crc kubenswrapper[4748]: W0216 15:09:28.900937 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2927c174_6e02_4529_802c_3bf02d82855f.slice/crio-84a93441552b7a2391a687cb6c567e1a20d1dc86023f22efcc6faa40f9045f57 WatchSource:0}: Error finding container 84a93441552b7a2391a687cb6c567e1a20d1dc86023f22efcc6faa40f9045f57: Status 404 returned error can't find the container with id 84a93441552b7a2391a687cb6c567e1a20d1dc86023f22efcc6faa40f9045f57 Feb 16 15:09:29 crc kubenswrapper[4748]: I0216 15:09:29.167851 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" 
event={"ID":"2927c174-6e02-4529-802c-3bf02d82855f","Type":"ContainerStarted","Data":"834b608474fd2026e8f838e1e45f82305d41b63bfe9141821d548449dd960eb5"} Feb 16 15:09:29 crc kubenswrapper[4748]: I0216 15:09:29.168180 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" event={"ID":"2927c174-6e02-4529-802c-3bf02d82855f","Type":"ContainerStarted","Data":"84a93441552b7a2391a687cb6c567e1a20d1dc86023f22efcc6faa40f9045f57"} Feb 16 15:09:29 crc kubenswrapper[4748]: I0216 15:09:29.169723 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:09:30 crc kubenswrapper[4748]: I0216 15:09:30.186291 4748 generic.go:334] "Generic (PLEG): container finished" podID="2927c174-6e02-4529-802c-3bf02d82855f" containerID="834b608474fd2026e8f838e1e45f82305d41b63bfe9141821d548449dd960eb5" exitCode=0 Feb 16 15:09:30 crc kubenswrapper[4748]: I0216 15:09:30.186665 4748 generic.go:334] "Generic (PLEG): container finished" podID="2927c174-6e02-4529-802c-3bf02d82855f" containerID="762bff475e1ab3dcfbbc0067868cfc4fd003339b4a544d9694d9ca872182043f" exitCode=0 Feb 16 15:09:30 crc kubenswrapper[4748]: I0216 15:09:30.186397 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" event={"ID":"2927c174-6e02-4529-802c-3bf02d82855f","Type":"ContainerDied","Data":"834b608474fd2026e8f838e1e45f82305d41b63bfe9141821d548449dd960eb5"} Feb 16 15:09:30 crc kubenswrapper[4748]: I0216 15:09:30.186751 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" event={"ID":"2927c174-6e02-4529-802c-3bf02d82855f","Type":"ContainerDied","Data":"762bff475e1ab3dcfbbc0067868cfc4fd003339b4a544d9694d9ca872182043f"} Feb 16 15:09:31 crc kubenswrapper[4748]: I0216 15:09:31.200591 4748 generic.go:334] "Generic 
(PLEG): container finished" podID="2927c174-6e02-4529-802c-3bf02d82855f" containerID="a13502979d47d26be6caec31521c2390da8b16996cdc06362aa27d56027f19ed" exitCode=0 Feb 16 15:09:31 crc kubenswrapper[4748]: I0216 15:09:31.200681 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" event={"ID":"2927c174-6e02-4529-802c-3bf02d82855f","Type":"ContainerDied","Data":"a13502979d47d26be6caec31521c2390da8b16996cdc06362aa27d56027f19ed"} Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.552459 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.659270 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-util\") pod \"2927c174-6e02-4529-802c-3bf02d82855f\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.659388 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqh9p\" (UniqueName: \"kubernetes.io/projected/2927c174-6e02-4529-802c-3bf02d82855f-kube-api-access-lqh9p\") pod \"2927c174-6e02-4529-802c-3bf02d82855f\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.659446 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-bundle\") pod \"2927c174-6e02-4529-802c-3bf02d82855f\" (UID: \"2927c174-6e02-4529-802c-3bf02d82855f\") " Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.660370 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-bundle" (OuterVolumeSpecName: "bundle") pod "2927c174-6e02-4529-802c-3bf02d82855f" (UID: "2927c174-6e02-4529-802c-3bf02d82855f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.666685 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2927c174-6e02-4529-802c-3bf02d82855f-kube-api-access-lqh9p" (OuterVolumeSpecName: "kube-api-access-lqh9p") pod "2927c174-6e02-4529-802c-3bf02d82855f" (UID: "2927c174-6e02-4529-802c-3bf02d82855f"). InnerVolumeSpecName "kube-api-access-lqh9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.677936 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-util" (OuterVolumeSpecName: "util") pod "2927c174-6e02-4529-802c-3bf02d82855f" (UID: "2927c174-6e02-4529-802c-3bf02d82855f"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.761670 4748 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-util\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.761766 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqh9p\" (UniqueName: \"kubernetes.io/projected/2927c174-6e02-4529-802c-3bf02d82855f-kube-api-access-lqh9p\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:32 crc kubenswrapper[4748]: I0216 15:09:32.761796 4748 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2927c174-6e02-4529-802c-3bf02d82855f-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:09:33 crc kubenswrapper[4748]: I0216 15:09:33.220953 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" event={"ID":"2927c174-6e02-4529-802c-3bf02d82855f","Type":"ContainerDied","Data":"84a93441552b7a2391a687cb6c567e1a20d1dc86023f22efcc6faa40f9045f57"} Feb 16 15:09:33 crc kubenswrapper[4748]: I0216 15:09:33.221028 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84a93441552b7a2391a687cb6c567e1a20d1dc86023f22efcc6faa40f9045f57" Feb 16 15:09:33 crc kubenswrapper[4748]: I0216 15:09:33.221067 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9" Feb 16 15:09:34 crc kubenswrapper[4748]: I0216 15:09:34.729282 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:09:34 crc kubenswrapper[4748]: I0216 15:09:34.729665 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.149871 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx"] Feb 16 15:09:40 crc kubenswrapper[4748]: E0216 15:09:40.150703 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2927c174-6e02-4529-802c-3bf02d82855f" containerName="extract" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.150807 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2927c174-6e02-4529-802c-3bf02d82855f" containerName="extract" Feb 16 15:09:40 crc kubenswrapper[4748]: E0216 15:09:40.150819 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2927c174-6e02-4529-802c-3bf02d82855f" containerName="util" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.150826 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2927c174-6e02-4529-802c-3bf02d82855f" containerName="util" Feb 16 15:09:40 crc kubenswrapper[4748]: E0216 15:09:40.150845 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2927c174-6e02-4529-802c-3bf02d82855f" 
containerName="pull" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.150851 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2927c174-6e02-4529-802c-3bf02d82855f" containerName="pull" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.150969 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2927c174-6e02-4529-802c-3bf02d82855f" containerName="extract" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.151447 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.153448 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-nds8n" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.178317 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx"] Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.274314 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw8qb\" (UniqueName: \"kubernetes.io/projected/60c7e70b-728c-4a23-9bdf-801548ee7c98-kube-api-access-tw8qb\") pod \"openstack-operator-controller-init-787c798d66-8b6lx\" (UID: \"60c7e70b-728c-4a23-9bdf-801548ee7c98\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.376073 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw8qb\" (UniqueName: \"kubernetes.io/projected/60c7e70b-728c-4a23-9bdf-801548ee7c98-kube-api-access-tw8qb\") pod \"openstack-operator-controller-init-787c798d66-8b6lx\" (UID: \"60c7e70b-728c-4a23-9bdf-801548ee7c98\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 
15:09:40.396760 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw8qb\" (UniqueName: \"kubernetes.io/projected/60c7e70b-728c-4a23-9bdf-801548ee7c98-kube-api-access-tw8qb\") pod \"openstack-operator-controller-init-787c798d66-8b6lx\" (UID: \"60c7e70b-728c-4a23-9bdf-801548ee7c98\") " pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.470034 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" Feb 16 15:09:40 crc kubenswrapper[4748]: I0216 15:09:40.710503 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx"] Feb 16 15:09:41 crc kubenswrapper[4748]: I0216 15:09:41.282879 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" event={"ID":"60c7e70b-728c-4a23-9bdf-801548ee7c98","Type":"ContainerStarted","Data":"36926a8e94358585f4e6a900c2f12812be959b1ecfba5412394deaa64bc518ee"} Feb 16 15:09:45 crc kubenswrapper[4748]: I0216 15:09:45.311377 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" event={"ID":"60c7e70b-728c-4a23-9bdf-801548ee7c98","Type":"ContainerStarted","Data":"e45fb7d1e82d5138480155d41ea9247aeae4be12945490bc90e2fdb4228a889a"} Feb 16 15:09:45 crc kubenswrapper[4748]: I0216 15:09:45.312338 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" Feb 16 15:09:45 crc kubenswrapper[4748]: I0216 15:09:45.339547 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" podStartSLOduration=1.590123326 
podStartE2EDuration="5.33952765s" podCreationTimestamp="2026-02-16 15:09:40 +0000 UTC" firstStartedPulling="2026-02-16 15:09:40.732378399 +0000 UTC m=+1006.424047438" lastFinishedPulling="2026-02-16 15:09:44.481782723 +0000 UTC m=+1010.173451762" observedRunningTime="2026-02-16 15:09:45.333904852 +0000 UTC m=+1011.025573901" watchObservedRunningTime="2026-02-16 15:09:45.33952765 +0000 UTC m=+1011.031196689" Feb 16 15:09:50 crc kubenswrapper[4748]: I0216 15:09:50.474225 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-787c798d66-8b6lx" Feb 16 15:10:04 crc kubenswrapper[4748]: I0216 15:10:04.729834 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:10:04 crc kubenswrapper[4748]: I0216 15:10:04.730452 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:10:04 crc kubenswrapper[4748]: I0216 15:10:04.730507 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 15:10:04 crc kubenswrapper[4748]: I0216 15:10:04.731160 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"67a90ffaffa57c523d924f86672f533c001cdd9525b4878908f13274a4bee682"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will 
be restarted" Feb 16 15:10:04 crc kubenswrapper[4748]: I0216 15:10:04.731214 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://67a90ffaffa57c523d924f86672f533c001cdd9525b4878908f13274a4bee682" gracePeriod=600 Feb 16 15:10:05 crc kubenswrapper[4748]: I0216 15:10:05.459232 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="67a90ffaffa57c523d924f86672f533c001cdd9525b4878908f13274a4bee682" exitCode=0 Feb 16 15:10:05 crc kubenswrapper[4748]: I0216 15:10:05.459306 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"67a90ffaffa57c523d924f86672f533c001cdd9525b4878908f13274a4bee682"} Feb 16 15:10:05 crc kubenswrapper[4748]: I0216 15:10:05.459600 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"4af7b39f84ae1089f7c8b9340185a28b394a9429bf33b6cefed9e396e13808b9"} Feb 16 15:10:05 crc kubenswrapper[4748]: I0216 15:10:05.459618 4748 scope.go:117] "RemoveContainer" containerID="4e49a4e16e48e1a2e71e7ad48688259341e07a21e8d2998708ad1242f0a4ff61" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.895993 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz"] Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.897377 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.899282 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-8c2cn" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.910298 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs"] Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.911218 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.913194 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-r8jjt" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.915432 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg"] Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.916133 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.921893 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz"] Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.922878 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-4mqw8" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.928145 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg"] Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.942772 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs"] Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.954209 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9"] Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.955769 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.960814 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4bblr" Feb 16 15:10:13 crc kubenswrapper[4748]: I0216 15:10:13.973233 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.000494 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.001626 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.005657 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-lnkfd" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.012213 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.013205 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.015748 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-7vdpv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.055779 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.058512 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.059413 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.062422 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-hlsxv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.072764 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-szbcv"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.073953 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.076520 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.076732 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-s2h7k" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085553 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv8m9\" (UniqueName: \"kubernetes.io/projected/6b8c8de2-3f25-4adb-9598-3beceb5aab8f-kube-api-access-pv8m9\") pod \"ironic-operator-controller-manager-554564d7fc-jvjnc\" (UID: \"6b8c8de2-3f25-4adb-9598-3beceb5aab8f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085603 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsrlr\" (UniqueName: \"kubernetes.io/projected/47a580d2-e511-4827-bc01-91189c1e34e9-kube-api-access-gsrlr\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085653 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq79z\" (UniqueName: \"kubernetes.io/projected/c8282c68-cc06-4252-be3f-12fd375413d5-kube-api-access-lq79z\") pod \"cinder-operator-controller-manager-5d946d989d-rmvdg\" (UID: \"c8282c68-cc06-4252-be3f-12fd375413d5\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085675 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xtv8\" (UniqueName: \"kubernetes.io/projected/d7e9b369-11d8-4aa9-a3b2-db6b88904b51-kube-api-access-8xtv8\") pod \"designate-operator-controller-manager-6d8bf5c495-64bzs\" (UID: \"d7e9b369-11d8-4aa9-a3b2-db6b88904b51\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085707 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58qhf\" (UniqueName: \"kubernetes.io/projected/8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325-kube-api-access-58qhf\") pod \"heat-operator-controller-manager-69f49c598c-xf72s\" (UID: \"8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085754 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmvn\" (UniqueName: \"kubernetes.io/projected/6c462cae-e6f6-4551-a63f-783b5355050d-kube-api-access-xdmvn\") pod \"horizon-operator-controller-manager-5b9b8895d5-2lgdc\" (UID: \"6c462cae-e6f6-4551-a63f-783b5355050d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085796 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7scm\" (UniqueName: \"kubernetes.io/projected/69a86f03-7f6c-48b7-bc6f-c6c432f735ce-kube-api-access-b7scm\") pod \"glance-operator-controller-manager-77987464f4-7vgh9\" (UID: \"69a86f03-7f6c-48b7-bc6f-c6c432f735ce\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085819 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-j5r49\" (UniqueName: \"kubernetes.io/projected/383f552e-0d7a-4c2e-8931-1e0605d309e2-kube-api-access-j5r49\") pod \"barbican-operator-controller-manager-868647ff47-zlwvz\" (UID: \"383f552e-0d7a-4c2e-8931-1e0605d309e2\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085851 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.085970 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.094501 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-szbcv"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.112753 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.128662 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.130220 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.152897 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gn25m" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.154392 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203027 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmvn\" (UniqueName: \"kubernetes.io/projected/6c462cae-e6f6-4551-a63f-783b5355050d-kube-api-access-xdmvn\") pod \"horizon-operator-controller-manager-5b9b8895d5-2lgdc\" (UID: \"6c462cae-e6f6-4551-a63f-783b5355050d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203075 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7scm\" (UniqueName: \"kubernetes.io/projected/69a86f03-7f6c-48b7-bc6f-c6c432f735ce-kube-api-access-b7scm\") pod \"glance-operator-controller-manager-77987464f4-7vgh9\" (UID: \"69a86f03-7f6c-48b7-bc6f-c6c432f735ce\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203096 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5r49\" (UniqueName: \"kubernetes.io/projected/383f552e-0d7a-4c2e-8931-1e0605d309e2-kube-api-access-j5r49\") pod \"barbican-operator-controller-manager-868647ff47-zlwvz\" (UID: \"383f552e-0d7a-4c2e-8931-1e0605d309e2\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203119 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203138 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv8m9\" (UniqueName: \"kubernetes.io/projected/6b8c8de2-3f25-4adb-9598-3beceb5aab8f-kube-api-access-pv8m9\") pod \"ironic-operator-controller-manager-554564d7fc-jvjnc\" (UID: \"6b8c8de2-3f25-4adb-9598-3beceb5aab8f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203164 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsrlr\" (UniqueName: \"kubernetes.io/projected/47a580d2-e511-4827-bc01-91189c1e34e9-kube-api-access-gsrlr\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203237 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq79z\" (UniqueName: \"kubernetes.io/projected/c8282c68-cc06-4252-be3f-12fd375413d5-kube-api-access-lq79z\") pod \"cinder-operator-controller-manager-5d946d989d-rmvdg\" (UID: \"c8282c68-cc06-4252-be3f-12fd375413d5\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203260 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xtv8\" (UniqueName: \"kubernetes.io/projected/d7e9b369-11d8-4aa9-a3b2-db6b88904b51-kube-api-access-8xtv8\") pod 
\"designate-operator-controller-manager-6d8bf5c495-64bzs\" (UID: \"d7e9b369-11d8-4aa9-a3b2-db6b88904b51\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.203304 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58qhf\" (UniqueName: \"kubernetes.io/projected/8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325-kube-api-access-58qhf\") pod \"heat-operator-controller-manager-69f49c598c-xf72s\" (UID: \"8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.203880 4748 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.203922 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert podName:47a580d2-e511-4827-bc01-91189c1e34e9 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:14.703907022 +0000 UTC m=+1040.395576051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert") pod "infra-operator-controller-manager-79d975b745-szbcv" (UID: "47a580d2-e511-4827-bc01-91189c1e34e9") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.210805 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.211741 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.215987 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.217746 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.223068 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-h2mzv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.228442 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-sxq8v" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.238979 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.239916 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.241112 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmvn\" (UniqueName: \"kubernetes.io/projected/6c462cae-e6f6-4551-a63f-783b5355050d-kube-api-access-xdmvn\") pod \"horizon-operator-controller-manager-5b9b8895d5-2lgdc\" (UID: \"6c462cae-e6f6-4551-a63f-783b5355050d\") " pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.242328 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsrlr\" (UniqueName: \"kubernetes.io/projected/47a580d2-e511-4827-bc01-91189c1e34e9-kube-api-access-gsrlr\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.242677 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-54z6n" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.247884 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7scm\" (UniqueName: \"kubernetes.io/projected/69a86f03-7f6c-48b7-bc6f-c6c432f735ce-kube-api-access-b7scm\") pod \"glance-operator-controller-manager-77987464f4-7vgh9\" (UID: \"69a86f03-7f6c-48b7-bc6f-c6c432f735ce\") " pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.250562 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.255051 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-j5r49\" (UniqueName: \"kubernetes.io/projected/383f552e-0d7a-4c2e-8931-1e0605d309e2-kube-api-access-j5r49\") pod \"barbican-operator-controller-manager-868647ff47-zlwvz\" (UID: \"383f552e-0d7a-4c2e-8931-1e0605d309e2\") " pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.256357 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58qhf\" (UniqueName: \"kubernetes.io/projected/8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325-kube-api-access-58qhf\") pod \"heat-operator-controller-manager-69f49c598c-xf72s\" (UID: \"8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325\") " pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.257422 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq79z\" (UniqueName: \"kubernetes.io/projected/c8282c68-cc06-4252-be3f-12fd375413d5-kube-api-access-lq79z\") pod \"cinder-operator-controller-manager-5d946d989d-rmvdg\" (UID: \"c8282c68-cc06-4252-be3f-12fd375413d5\") " pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.257845 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.259278 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv8m9\" (UniqueName: \"kubernetes.io/projected/6b8c8de2-3f25-4adb-9598-3beceb5aab8f-kube-api-access-pv8m9\") pod \"ironic-operator-controller-manager-554564d7fc-jvjnc\" (UID: \"6b8c8de2-3f25-4adb-9598-3beceb5aab8f\") " pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.267158 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.268065 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xtv8\" (UniqueName: \"kubernetes.io/projected/d7e9b369-11d8-4aa9-a3b2-db6b88904b51-kube-api-access-8xtv8\") pod \"designate-operator-controller-manager-6d8bf5c495-64bzs\" (UID: \"d7e9b369-11d8-4aa9-a3b2-db6b88904b51\") " pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.274689 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.302765 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.308487 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nlwq\" (UniqueName: \"kubernetes.io/projected/08a4c7e1-1e32-4f6e-8fdc-d622dbe06059-kube-api-access-7nlwq\") pod \"keystone-operator-controller-manager-b4d948c87-ss7f5\" (UID: \"08a4c7e1-1e32-4f6e-8fdc-d622dbe06059\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.315903 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.316828 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.319955 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-bhp6f" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.325506 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.326811 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.328931 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-n8xcz" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.330412 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.340938 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.348954 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.349441 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.359353 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.360413 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.363780 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-rvzlt" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.364054 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.371024 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.374377 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.375609 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.380764 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.381680 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.386054 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-mb7gm" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.386444 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.386921 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.391284 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-k6tmr" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.392235 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.407387 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q95f2"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.408523 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409562 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbth4\" (UniqueName: \"kubernetes.io/projected/f39955d7-4055-4a9d-8c21-eafa5ddd3f7f-kube-api-access-sbth4\") pod \"mariadb-operator-controller-manager-6994f66f48-594mx\" (UID: \"f39955d7-4055-4a9d-8c21-eafa5ddd3f7f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409604 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f9hf\" (UniqueName: \"kubernetes.io/projected/e7c3328c-8c35-4dab-8082-d7ee6d6c53f5-kube-api-access-2f9hf\") pod \"nova-operator-controller-manager-567668f5cf-fk48d\" (UID: \"e7c3328c-8c35-4dab-8082-d7ee6d6c53f5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409651 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8tc8\" (UniqueName: \"kubernetes.io/projected/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-kube-api-access-r8tc8\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409676 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409703 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nlwq\" (UniqueName: \"kubernetes.io/projected/08a4c7e1-1e32-4f6e-8fdc-d622dbe06059-kube-api-access-7nlwq\") pod \"keystone-operator-controller-manager-b4d948c87-ss7f5\" (UID: \"08a4c7e1-1e32-4f6e-8fdc-d622dbe06059\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409739 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26gz\" (UniqueName: \"kubernetes.io/projected/a6eb6394-3349-4a90-bf7a-6677191f0c5a-kube-api-access-n26gz\") pod \"octavia-operator-controller-manager-69f8888797-99tg5\" (UID: \"a6eb6394-3349-4a90-bf7a-6677191f0c5a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409769 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npp68\" (UniqueName: \"kubernetes.io/projected/5e9f0b4c-6645-4cc6-ad91-043721d84e74-kube-api-access-npp68\") pod \"placement-operator-controller-manager-8497b45c89-bzrm7\" (UID: \"5e9f0b4c-6645-4cc6-ad91-043721d84e74\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409802 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljc8m\" (UniqueName: \"kubernetes.io/projected/aadfc2ec-ea6d-440c-9c0d-d5005e39230c-kube-api-access-ljc8m\") pod \"manila-operator-controller-manager-54f6768c69-tlr6k\" (UID: \"aadfc2ec-ea6d-440c-9c0d-d5005e39230c\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" Feb 16 
15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409821 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mpc6\" (UniqueName: \"kubernetes.io/projected/77cc0c29-605d-46d3-98a8-f9aeecbe888b-kube-api-access-2mpc6\") pod \"ovn-operator-controller-manager-d44cf6b75-wjnt6\" (UID: \"77cc0c29-605d-46d3-98a8-f9aeecbe888b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.409842 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76r4n\" (UniqueName: \"kubernetes.io/projected/2e85817e-216a-4784-880a-f433c52032af-kube-api-access-76r4n\") pod \"neutron-operator-controller-manager-64ddbf8bb-z478p\" (UID: \"2e85817e-216a-4784-880a-f433c52032af\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.410665 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-shn7v" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.414102 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q95f2"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.432097 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nlwq\" (UniqueName: \"kubernetes.io/projected/08a4c7e1-1e32-4f6e-8fdc-d622dbe06059-kube-api-access-7nlwq\") pod \"keystone-operator-controller-manager-b4d948c87-ss7f5\" (UID: \"08a4c7e1-1e32-4f6e-8fdc-d622dbe06059\") " pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.433443 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287"] Feb 16 
15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.435340 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.437549 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-t7s4f" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.454389 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.479309 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.499564 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pktjs"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.501531 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.509632 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-qjphh" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.514486 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pktjs"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515030 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8tc8\" (UniqueName: \"kubernetes.io/projected/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-kube-api-access-r8tc8\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515108 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqsn\" (UniqueName: \"kubernetes.io/projected/ada66d46-4901-45cc-9b08-a3578fadfda0-kube-api-access-csqsn\") pod \"swift-operator-controller-manager-68f46476f-q95f2\" (UID: \"ada66d46-4901-45cc-9b08-a3578fadfda0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515165 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515199 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n26gz\" (UniqueName: \"kubernetes.io/projected/a6eb6394-3349-4a90-bf7a-6677191f0c5a-kube-api-access-n26gz\") pod \"octavia-operator-controller-manager-69f8888797-99tg5\" (UID: \"a6eb6394-3349-4a90-bf7a-6677191f0c5a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515334 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srpv4\" (UniqueName: \"kubernetes.io/projected/5a824236-f0aa-4b02-8357-0c8275fa6509-kube-api-access-srpv4\") pod \"telemetry-operator-controller-manager-6ccb9b958b-7h287\" (UID: \"5a824236-f0aa-4b02-8357-0c8275fa6509\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515370 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npp68\" (UniqueName: \"kubernetes.io/projected/5e9f0b4c-6645-4cc6-ad91-043721d84e74-kube-api-access-npp68\") pod \"placement-operator-controller-manager-8497b45c89-bzrm7\" (UID: \"5e9f0b4c-6645-4cc6-ad91-043721d84e74\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515439 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljc8m\" (UniqueName: \"kubernetes.io/projected/aadfc2ec-ea6d-440c-9c0d-d5005e39230c-kube-api-access-ljc8m\") pod \"manila-operator-controller-manager-54f6768c69-tlr6k\" (UID: \"aadfc2ec-ea6d-440c-9c0d-d5005e39230c\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515489 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mpc6\" (UniqueName: 
\"kubernetes.io/projected/77cc0c29-605d-46d3-98a8-f9aeecbe888b-kube-api-access-2mpc6\") pod \"ovn-operator-controller-manager-d44cf6b75-wjnt6\" (UID: \"77cc0c29-605d-46d3-98a8-f9aeecbe888b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515518 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76r4n\" (UniqueName: \"kubernetes.io/projected/2e85817e-216a-4784-880a-f433c52032af-kube-api-access-76r4n\") pod \"neutron-operator-controller-manager-64ddbf8bb-z478p\" (UID: \"2e85817e-216a-4784-880a-f433c52032af\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515600 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5fjn\" (UniqueName: \"kubernetes.io/projected/26e5a91e-b0ec-44ff-bcb7-edebf76310ce-kube-api-access-m5fjn\") pod \"test-operator-controller-manager-7866795846-pktjs\" (UID: \"26e5a91e-b0ec-44ff-bcb7-edebf76310ce\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515652 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbth4\" (UniqueName: \"kubernetes.io/projected/f39955d7-4055-4a9d-8c21-eafa5ddd3f7f-kube-api-access-sbth4\") pod \"mariadb-operator-controller-manager-6994f66f48-594mx\" (UID: \"f39955d7-4055-4a9d-8c21-eafa5ddd3f7f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.515685 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f9hf\" (UniqueName: \"kubernetes.io/projected/e7c3328c-8c35-4dab-8082-d7ee6d6c53f5-kube-api-access-2f9hf\") pod \"nova-operator-controller-manager-567668f5cf-fk48d\" (UID: 
\"e7c3328c-8c35-4dab-8082-d7ee6d6c53f5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.518209 4748 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.518264 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert podName:58388a3c-6479-40b7-a5cb-4d83fc2a38b3 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:15.018248313 +0000 UTC m=+1040.709917352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" (UID: "58388a3c-6479-40b7-a5cb-4d83fc2a38b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.518856 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.553995 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.599473 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npp68\" (UniqueName: \"kubernetes.io/projected/5e9f0b4c-6645-4cc6-ad91-043721d84e74-kube-api-access-npp68\") pod \"placement-operator-controller-manager-8497b45c89-bzrm7\" (UID: \"5e9f0b4c-6645-4cc6-ad91-043721d84e74\") " pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.609483 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n26gz\" (UniqueName: \"kubernetes.io/projected/a6eb6394-3349-4a90-bf7a-6677191f0c5a-kube-api-access-n26gz\") pod \"octavia-operator-controller-manager-69f8888797-99tg5\" (UID: \"a6eb6394-3349-4a90-bf7a-6677191f0c5a\") " pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.610554 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8tc8\" (UniqueName: \"kubernetes.io/projected/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-kube-api-access-r8tc8\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.618083 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f9hf\" (UniqueName: \"kubernetes.io/projected/e7c3328c-8c35-4dab-8082-d7ee6d6c53f5-kube-api-access-2f9hf\") pod \"nova-operator-controller-manager-567668f5cf-fk48d\" (UID: \"e7c3328c-8c35-4dab-8082-d7ee6d6c53f5\") " pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 
15:10:14.620516 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljc8m\" (UniqueName: \"kubernetes.io/projected/aadfc2ec-ea6d-440c-9c0d-d5005e39230c-kube-api-access-ljc8m\") pod \"manila-operator-controller-manager-54f6768c69-tlr6k\" (UID: \"aadfc2ec-ea6d-440c-9c0d-d5005e39230c\") " pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.624639 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76r4n\" (UniqueName: \"kubernetes.io/projected/2e85817e-216a-4784-880a-f433c52032af-kube-api-access-76r4n\") pod \"neutron-operator-controller-manager-64ddbf8bb-z478p\" (UID: \"2e85817e-216a-4784-880a-f433c52032af\") " pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.631189 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbth4\" (UniqueName: \"kubernetes.io/projected/f39955d7-4055-4a9d-8c21-eafa5ddd3f7f-kube-api-access-sbth4\") pod \"mariadb-operator-controller-manager-6994f66f48-594mx\" (UID: \"f39955d7-4055-4a9d-8c21-eafa5ddd3f7f\") " pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.636723 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5fjn\" (UniqueName: \"kubernetes.io/projected/26e5a91e-b0ec-44ff-bcb7-edebf76310ce-kube-api-access-m5fjn\") pod \"test-operator-controller-manager-7866795846-pktjs\" (UID: \"26e5a91e-b0ec-44ff-bcb7-edebf76310ce\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.636896 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csqsn\" (UniqueName: 
\"kubernetes.io/projected/ada66d46-4901-45cc-9b08-a3578fadfda0-kube-api-access-csqsn\") pod \"swift-operator-controller-manager-68f46476f-q95f2\" (UID: \"ada66d46-4901-45cc-9b08-a3578fadfda0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.637010 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srpv4\" (UniqueName: \"kubernetes.io/projected/5a824236-f0aa-4b02-8357-0c8275fa6509-kube-api-access-srpv4\") pod \"telemetry-operator-controller-manager-6ccb9b958b-7h287\" (UID: \"5a824236-f0aa-4b02-8357-0c8275fa6509\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.637848 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mpc6\" (UniqueName: \"kubernetes.io/projected/77cc0c29-605d-46d3-98a8-f9aeecbe888b-kube-api-access-2mpc6\") pod \"ovn-operator-controller-manager-d44cf6b75-wjnt6\" (UID: \"77cc0c29-605d-46d3-98a8-f9aeecbe888b\") " pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.656862 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csqsn\" (UniqueName: \"kubernetes.io/projected/ada66d46-4901-45cc-9b08-a3578fadfda0-kube-api-access-csqsn\") pod \"swift-operator-controller-manager-68f46476f-q95f2\" (UID: \"ada66d46-4901-45cc-9b08-a3578fadfda0\") " pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.660683 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.661524 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.663100 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.666215 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w6rnr" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.666302 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5fjn\" (UniqueName: \"kubernetes.io/projected/26e5a91e-b0ec-44ff-bcb7-edebf76310ce-kube-api-access-m5fjn\") pod \"test-operator-controller-manager-7866795846-pktjs\" (UID: \"26e5a91e-b0ec-44ff-bcb7-edebf76310ce\") " pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.667906 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srpv4\" (UniqueName: \"kubernetes.io/projected/5a824236-f0aa-4b02-8357-0c8275fa6509-kube-api-access-srpv4\") pod \"telemetry-operator-controller-manager-6ccb9b958b-7h287\" (UID: \"5a824236-f0aa-4b02-8357-0c8275fa6509\") " pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.690662 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.708633 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.726655 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.727980 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.731916 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qz4ff" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.732119 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.732233 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.740039 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.741418 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.741524 4748 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.741571 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert podName:47a580d2-e511-4827-bc01-91189c1e34e9 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:15.741553218 +0000 UTC m=+1041.433222257 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert") pod "infra-operator-controller-manager-79d975b745-szbcv" (UID: "47a580d2-e511-4827-bc01-91189c1e34e9") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.761306 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.762667 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.771881 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.772793 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.773432 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.778524 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-xvhjw" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.784615 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.796120 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.806415 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.840979 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.853193 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.853269 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgbbz\" (UniqueName: \"kubernetes.io/projected/240e408e-e2ef-4375-a604-f5b29fc5bdfc-kube-api-access-zgbbz\") pod \"watcher-operator-controller-manager-5db88f68c-8g4n4\" (UID: \"240e408e-e2ef-4375-a604-f5b29fc5bdfc\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.853330 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.853367 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9f8\" (UniqueName: \"kubernetes.io/projected/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-kube-api-access-rm9f8\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 
16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.853451 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj48q\" (UniqueName: \"kubernetes.io/projected/585bdba9-5fef-469b-a5a2-8b4a15719360-kube-api-access-bj48q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zw2pr\" (UID: \"585bdba9-5fef-469b-a5a2-8b4a15719360\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.859259 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.875557 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.928545 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg"] Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.957829 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj48q\" (UniqueName: \"kubernetes.io/projected/585bdba9-5fef-469b-a5a2-8b4a15719360-kube-api-access-bj48q\") pod \"rabbitmq-cluster-operator-manager-668c99d594-zw2pr\" (UID: \"585bdba9-5fef-469b-a5a2-8b4a15719360\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.957917 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " 
pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.957954 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgbbz\" (UniqueName: \"kubernetes.io/projected/240e408e-e2ef-4375-a604-f5b29fc5bdfc-kube-api-access-zgbbz\") pod \"watcher-operator-controller-manager-5db88f68c-8g4n4\" (UID: \"240e408e-e2ef-4375-a604-f5b29fc5bdfc\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.957984 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:14 crc kubenswrapper[4748]: I0216 15:10:14.958009 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm9f8\" (UniqueName: \"kubernetes.io/projected/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-kube-api-access-rm9f8\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.959122 4748 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.959163 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. 
No retries permitted until 2026-02-16 15:10:15.459148913 +0000 UTC m=+1041.150817952 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "metrics-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.959328 4748 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:14 crc kubenswrapper[4748]: E0216 15:10:14.959350 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:15.459343258 +0000 UTC m=+1041.151012297 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "webhook-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.009555 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm9f8\" (UniqueName: \"kubernetes.io/projected/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-kube-api-access-rm9f8\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.010761 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj48q\" (UniqueName: \"kubernetes.io/projected/585bdba9-5fef-469b-a5a2-8b4a15719360-kube-api-access-bj48q\") pod 
\"rabbitmq-cluster-operator-manager-668c99d594-zw2pr\" (UID: \"585bdba9-5fef-469b-a5a2-8b4a15719360\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.013414 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgbbz\" (UniqueName: \"kubernetes.io/projected/240e408e-e2ef-4375-a604-f5b29fc5bdfc-kube-api-access-zgbbz\") pod \"watcher-operator-controller-manager-5db88f68c-8g4n4\" (UID: \"240e408e-e2ef-4375-a604-f5b29fc5bdfc\") " pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.017546 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" Feb 16 15:10:15 crc kubenswrapper[4748]: W0216 15:10:15.033866 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8282c68_cc06_4252_be3f_12fd375413d5.slice/crio-8d78f99b110084a8b50731c326fd3364bf0b934fa16f19008259243b784a9292 WatchSource:0}: Error finding container 8d78f99b110084a8b50731c326fd3364bf0b934fa16f19008259243b784a9292: Status 404 returned error can't find the container with id 8d78f99b110084a8b50731c326fd3364bf0b934fa16f19008259243b784a9292 Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.060000 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.062080 4748 secret.go:188] Couldn't get secret 
openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.062161 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert podName:58388a3c-6479-40b7-a5cb-4d83fc2a38b3 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:16.062142502 +0000 UTC m=+1041.753811541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" (UID: "58388a3c-6479-40b7-a5cb-4d83fc2a38b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.132206 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.157754 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.211314 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.370687 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.465782 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.465924 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.466332 4748 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.466400 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:16.466384111 +0000 UTC m=+1042.158053150 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "metrics-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.466561 4748 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.466596 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:16.466588436 +0000 UTC m=+1042.158257476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "webhook-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.630521 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" event={"ID":"8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325","Type":"ContainerStarted","Data":"de416bfcec4376d98c7c5af53cb8dad5cb283e2a54221f727f53b0bd7c073571"} Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.631914 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" event={"ID":"08a4c7e1-1e32-4f6e-8fdc-d622dbe06059","Type":"ContainerStarted","Data":"4db34cf74e31710bf7384ac4e7f09fcd108af319714d823c83ee6ec837735b78"} Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.632991 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" event={"ID":"69a86f03-7f6c-48b7-bc6f-c6c432f735ce","Type":"ContainerStarted","Data":"8b359b2da56bfe4ccf67671bcf340f711246142cee61c18a9d7dc718e1fda8b1"} Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.633820 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" event={"ID":"c8282c68-cc06-4252-be3f-12fd375413d5","Type":"ContainerStarted","Data":"8d78f99b110084a8b50731c326fd3364bf0b934fa16f19008259243b784a9292"} Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.740902 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.751977 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.770523 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.771026 4748 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.771089 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert podName:47a580d2-e511-4827-bc01-91189c1e34e9 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:17.771064335 +0000 UTC m=+1043.462733374 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert") pod "infra-operator-controller-manager-79d975b745-szbcv" (UID: "47a580d2-e511-4827-bc01-91189c1e34e9") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.819926 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.836129 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.845966 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.855503 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.860939 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.926613 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx"] Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.944081 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-76r4n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-64ddbf8bb-z478p_openstack-operators(2e85817e-216a-4784-880a-f433c52032af): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:10:15 crc kubenswrapper[4748]: E0216 15:10:15.945401 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" podUID="2e85817e-216a-4784-880a-f433c52032af" Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.951134 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.963448 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5"] Feb 16 15:10:15 crc kubenswrapper[4748]: I0216 15:10:15.969225 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287"] Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.080834 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.081012 4748 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.081096 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert podName:58388a3c-6479-40b7-a5cb-4d83fc2a38b3 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:18.08107511 +0000 UTC m=+1043.772744139 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" (UID: "58388a3c-6479-40b7-a5cb-4d83fc2a38b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.166699 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68f46476f-q95f2"] Feb 16 15:10:16 crc kubenswrapper[4748]: W0216 15:10:16.177064 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podada66d46_4901_45cc_9b08_a3578fadfda0.slice/crio-9921dec35582c7b73a6967892cf09bffceffb3f8c1ff0c808bb1d191a49b18e2 WatchSource:0}: Error finding container 9921dec35582c7b73a6967892cf09bffceffb3f8c1ff0c808bb1d191a49b18e2: Status 404 returned error can't find the container with id 
9921dec35582c7b73a6967892cf09bffceffb3f8c1ff0c808bb1d191a49b18e2 Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.178235 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d"] Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.190824 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7866795846-pktjs"] Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.196988 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr"] Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.204328 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4"] Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.252828 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bj48q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-zw2pr_openstack-operators(585bdba9-5fef-469b-a5a2-8b4a15719360): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.252960 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2f9hf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-567668f5cf-fk48d_openstack-operators(e7c3328c-8c35-4dab-8082-d7ee6d6c53f5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.254010 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" podUID="e7c3328c-8c35-4dab-8082-d7ee6d6c53f5" Feb 16 15:10:16 crc 
kubenswrapper[4748]: E0216 15:10:16.254047 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" podUID="585bdba9-5fef-469b-a5a2-8b4a15719360" Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.259273 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zgbbz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5db88f68c-8g4n4_openstack-operators(240e408e-e2ef-4375-a604-f5b29fc5bdfc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.260823 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" podUID="240e408e-e2ef-4375-a604-f5b29fc5bdfc" Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.486322 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.486448 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.486581 4748 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.486627 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:18.486613841 +0000 UTC m=+1044.178282880 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "metrics-server-cert" not found Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.486669 4748 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.486689 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:18.486684012 +0000 UTC m=+1044.178353051 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "webhook-server-cert" not found Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.682187 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" event={"ID":"5a824236-f0aa-4b02-8357-0c8275fa6509","Type":"ContainerStarted","Data":"090dbcc9d8a906b6b65c55b5a1b92f0877b7ab16c558d89af4c86a1e0b66a03b"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.685857 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" event={"ID":"6b8c8de2-3f25-4adb-9598-3beceb5aab8f","Type":"ContainerStarted","Data":"5e3473a54171a9e40e0216836dce949414e1aa6a0778c65a6deecccaf9878a42"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.688348 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" event={"ID":"585bdba9-5fef-469b-a5a2-8b4a15719360","Type":"ContainerStarted","Data":"554a43623f0416ae544c918729cf22485741062bb12f09c143d6585c94d10e92"} Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.690012 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" podUID="585bdba9-5fef-469b-a5a2-8b4a15719360" Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.690850 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" event={"ID":"6c462cae-e6f6-4551-a63f-783b5355050d","Type":"ContainerStarted","Data":"d96c9d6da8ca97b33469fb4e88fab6f6871d328b5de20ccc0731e071734d3317"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.693025 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" event={"ID":"77cc0c29-605d-46d3-98a8-f9aeecbe888b","Type":"ContainerStarted","Data":"421a8e6d1ee8e4378e7f23491243661c7070d5d3b1c104101757d92b37b8d9d2"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.695296 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" event={"ID":"383f552e-0d7a-4c2e-8931-1e0605d309e2","Type":"ContainerStarted","Data":"448cc938bf3f64a8bb567f0ed5e6a18ac9fa71a8af8f08b0a94b528f2156083c"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.697100 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" event={"ID":"e7c3328c-8c35-4dab-8082-d7ee6d6c53f5","Type":"ContainerStarted","Data":"d22ca5784877aa9bf4c5b3b87aabc156a33ebfca1e9b3b9295230d8b99b1c2da"} Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.698779 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" podUID="e7c3328c-8c35-4dab-8082-d7ee6d6c53f5" Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.698870 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" 
event={"ID":"ada66d46-4901-45cc-9b08-a3578fadfda0","Type":"ContainerStarted","Data":"9921dec35582c7b73a6967892cf09bffceffb3f8c1ff0c808bb1d191a49b18e2"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.712077 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" event={"ID":"5e9f0b4c-6645-4cc6-ad91-043721d84e74","Type":"ContainerStarted","Data":"29d0232a3bc737baa9f845d1b20d2a3ff2873a349823a9eee543e44077c499f8"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.715108 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" event={"ID":"240e408e-e2ef-4375-a604-f5b29fc5bdfc","Type":"ContainerStarted","Data":"a6e17450fa7a675aacae2035452d2f3ab6233defb53e0f60b49bff01f967d7b5"} Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.720383 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" podUID="240e408e-e2ef-4375-a604-f5b29fc5bdfc" Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.739210 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" event={"ID":"a6eb6394-3349-4a90-bf7a-6677191f0c5a","Type":"ContainerStarted","Data":"da8d4ba856ef3bbe84f1e25f5c20e924df0f7b5b596d8d9cd8431bc4cd40c009"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.741452 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" 
event={"ID":"d7e9b369-11d8-4aa9-a3b2-db6b88904b51","Type":"ContainerStarted","Data":"110c550898551d95d502d120152f104ed70069bcbcb15f00d44e52683b1515b5"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.743346 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" event={"ID":"f39955d7-4055-4a9d-8c21-eafa5ddd3f7f","Type":"ContainerStarted","Data":"a9ccab1ca6a2b1b1dff834c3646c66f65885fb3edd8bec1d8bdb2316e107e6b1"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.744993 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" event={"ID":"aadfc2ec-ea6d-440c-9c0d-d5005e39230c","Type":"ContainerStarted","Data":"55045eaea4ba927eb381c68406d163c06cc880bf8cb6305c095fb68ffee4ed1b"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.751742 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" event={"ID":"26e5a91e-b0ec-44ff-bcb7-edebf76310ce","Type":"ContainerStarted","Data":"962444e1cd4770120e617f50d30ba41e93831246db4d0a092e198aec50526436"} Feb 16 15:10:16 crc kubenswrapper[4748]: I0216 15:10:16.771151 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" event={"ID":"2e85817e-216a-4784-880a-f433c52032af","Type":"ContainerStarted","Data":"844d417aa5d986ae998d521ad3858226e3ee8249788f8f4674e7d2714c64fd35"} Feb 16 15:10:16 crc kubenswrapper[4748]: E0216 15:10:16.774407 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" 
podUID="2e85817e-216a-4784-880a-f433c52032af" Feb 16 15:10:17 crc kubenswrapper[4748]: E0216 15:10:17.803077 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:e4689246ae78635dc3c1db9c677d8b16b8f94276df15fb9c84bfc57cc6578fcf\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" podUID="2e85817e-216a-4784-880a-f433c52032af" Feb 16 15:10:17 crc kubenswrapper[4748]: E0216 15:10:17.803735 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:fe85dd595906fac0fe1e7a42215bb306a963cf87d55e07cd2573726b690b2838\\\"\"" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" podUID="e7c3328c-8c35-4dab-8082-d7ee6d6c53f5" Feb 16 15:10:17 crc kubenswrapper[4748]: E0216 15:10:17.803798 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:d01ae848290e880c09127d5297418dea40fc7f090fdab9bf2c578c7e7f53aec0\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" podUID="240e408e-e2ef-4375-a604-f5b29fc5bdfc" Feb 16 15:10:17 crc kubenswrapper[4748]: I0216 15:10:17.810863 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:17 crc kubenswrapper[4748]: E0216 15:10:17.811005 4748 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:17 crc kubenswrapper[4748]: E0216 15:10:17.811047 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert podName:47a580d2-e511-4827-bc01-91189c1e34e9 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:21.811035051 +0000 UTC m=+1047.502704090 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert") pod "infra-operator-controller-manager-79d975b745-szbcv" (UID: "47a580d2-e511-4827-bc01-91189c1e34e9") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:17 crc kubenswrapper[4748]: E0216 15:10:17.827974 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" podUID="585bdba9-5fef-469b-a5a2-8b4a15719360" Feb 16 15:10:18 crc kubenswrapper[4748]: I0216 15:10:18.126406 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:18 crc kubenswrapper[4748]: E0216 15:10:18.127737 4748 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:18 crc kubenswrapper[4748]: E0216 15:10:18.128537 
4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert podName:58388a3c-6479-40b7-a5cb-4d83fc2a38b3 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:22.128494188 +0000 UTC m=+1047.820163227 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" (UID: "58388a3c-6479-40b7-a5cb-4d83fc2a38b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:18 crc kubenswrapper[4748]: I0216 15:10:18.533091 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:18 crc kubenswrapper[4748]: I0216 15:10:18.533171 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:18 crc kubenswrapper[4748]: E0216 15:10:18.533291 4748 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:18 crc kubenswrapper[4748]: E0216 15:10:18.533346 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. 
No retries permitted until 2026-02-16 15:10:22.533330342 +0000 UTC m=+1048.224999381 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "webhook-server-cert" not found Feb 16 15:10:18 crc kubenswrapper[4748]: E0216 15:10:18.533681 4748 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:18 crc kubenswrapper[4748]: E0216 15:10:18.533729 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:22.533701371 +0000 UTC m=+1048.225370410 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "metrics-server-cert" not found Feb 16 15:10:21 crc kubenswrapper[4748]: I0216 15:10:21.890654 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:21 crc kubenswrapper[4748]: E0216 15:10:21.891331 4748 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:21 crc kubenswrapper[4748]: E0216 15:10:21.891669 4748 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert podName:47a580d2-e511-4827-bc01-91189c1e34e9 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:29.891627019 +0000 UTC m=+1055.583296128 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert") pod "infra-operator-controller-manager-79d975b745-szbcv" (UID: "47a580d2-e511-4827-bc01-91189c1e34e9") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:22 crc kubenswrapper[4748]: I0216 15:10:22.195668 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:22 crc kubenswrapper[4748]: E0216 15:10:22.195961 4748 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:22 crc kubenswrapper[4748]: E0216 15:10:22.196108 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert podName:58388a3c-6479-40b7-a5cb-4d83fc2a38b3 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:30.196074377 +0000 UTC m=+1055.887743406 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" (UID: "58388a3c-6479-40b7-a5cb-4d83fc2a38b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:22 crc kubenswrapper[4748]: I0216 15:10:22.603309 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:22 crc kubenswrapper[4748]: I0216 15:10:22.603385 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:22 crc kubenswrapper[4748]: E0216 15:10:22.603475 4748 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:22 crc kubenswrapper[4748]: E0216 15:10:22.603524 4748 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:22 crc kubenswrapper[4748]: E0216 15:10:22.603550 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:30.603531775 +0000 UTC m=+1056.295200814 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "metrics-server-cert" not found Feb 16 15:10:22 crc kubenswrapper[4748]: E0216 15:10:22.603575 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:30.603558616 +0000 UTC m=+1056.295227655 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "webhook-server-cert" not found Feb 16 15:10:28 crc kubenswrapper[4748]: E0216 15:10:28.863429 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da" Feb 16 15:10:28 crc kubenswrapper[4748]: E0216 15:10:28.864198 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdmvn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5b9b8895d5-2lgdc_openstack-operators(6c462cae-e6f6-4551-a63f-783b5355050d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:28 crc kubenswrapper[4748]: E0216 15:10:28.865591 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" podUID="6c462cae-e6f6-4551-a63f-783b5355050d" Feb 16 15:10:28 crc kubenswrapper[4748]: E0216 15:10:28.887211 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:9f2e1299d908411457e53b49e1062265d2b9d76f6719db24d1be9347c388e4da\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" podUID="6c462cae-e6f6-4551-a63f-783b5355050d" Feb 16 15:10:29 crc kubenswrapper[4748]: I0216 15:10:29.926938 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:29 crc kubenswrapper[4748]: E0216 15:10:29.927099 4748 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:29 crc kubenswrapper[4748]: E0216 15:10:29.927193 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert podName:47a580d2-e511-4827-bc01-91189c1e34e9 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:45.92717123 +0000 UTC m=+1071.618840259 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert") pod "infra-operator-controller-manager-79d975b745-szbcv" (UID: "47a580d2-e511-4827-bc01-91189c1e34e9") : secret "infra-operator-webhook-server-cert" not found Feb 16 15:10:30 crc kubenswrapper[4748]: I0216 15:10:30.236970 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.238227 4748 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.238371 4748 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert podName:58388a3c-6479-40b7-a5cb-4d83fc2a38b3 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:46.238348773 +0000 UTC m=+1071.930017802 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" (UID: "58388a3c-6479-40b7-a5cb-4d83fc2a38b3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 16 15:10:30 crc kubenswrapper[4748]: I0216 15:10:30.645579 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.645739 4748 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.645822 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:46.645801741 +0000 UTC m=+1072.337470780 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "metrics-server-cert" not found Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.645864 4748 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.645938 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs podName:b0e6e37b-e4a2-4013-9678-7412c55e0fd0 nodeName:}" failed. No retries permitted until 2026-02-16 15:10:46.645916034 +0000 UTC m=+1072.337585093 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs") pod "openstack-operator-controller-manager-5b45b684f5-xhrt2" (UID: "b0e6e37b-e4a2-4013-9678-7412c55e0fd0") : secret "webhook-server-cert" not found Feb 16 15:10:30 crc kubenswrapper[4748]: I0216 15:10:30.645747 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.737551 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979" Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.737774 4748 
kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lq79z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-5d946d989d-rmvdg_openstack-operators(c8282c68-cc06-4252-be3f-12fd375413d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.739084 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" podUID="c8282c68-cc06-4252-be3f-12fd375413d5" Feb 16 15:10:30 crc kubenswrapper[4748]: E0216 15:10:30.901334 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:2b8ab3063af4aaeed0198197aae6f391c6647ac686c94c85668537f1d5933979\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" podUID="c8282c68-cc06-4252-be3f-12fd375413d5" Feb 16 15:10:31 crc kubenswrapper[4748]: E0216 15:10:31.327673 4748 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a" Feb 16 15:10:31 crc kubenswrapper[4748]: E0216 15:10:31.327945 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sbth4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6994f66f48-594mx_openstack-operators(f39955d7-4055-4a9d-8c21-eafa5ddd3f7f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:31 crc kubenswrapper[4748]: E0216 15:10:31.329165 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" podUID="f39955d7-4055-4a9d-8c21-eafa5ddd3f7f" Feb 16 15:10:31 crc kubenswrapper[4748]: E0216 15:10:31.907411 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:a18f12497b7159b100fcfd72c7ba2273d0669a5c00600a9ff1333bca028f256a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" podUID="f39955d7-4055-4a9d-8c21-eafa5ddd3f7f" Feb 16 15:10:32 crc kubenswrapper[4748]: E0216 15:10:32.079595 4748 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1" Feb 16 15:10:32 crc kubenswrapper[4748]: E0216 15:10:32.079793 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7nlwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b4d948c87-ss7f5_openstack-operators(08a4c7e1-1e32-4f6e-8fdc-d622dbe06059): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:10:32 crc kubenswrapper[4748]: E0216 15:10:32.081003 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" podUID="08a4c7e1-1e32-4f6e-8fdc-d622dbe06059" Feb 16 15:10:32 crc kubenswrapper[4748]: E0216 15:10:32.926235 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:c6ad383f55f955902b074d1ee947a2233a5fcbf40698479ae693ce056c80dcc1\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" podUID="08a4c7e1-1e32-4f6e-8fdc-d622dbe06059" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.929497 4748 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" event={"ID":"69a86f03-7f6c-48b7-bc6f-c6c432f735ce","Type":"ContainerStarted","Data":"b5c4fb01cbde83eb25c42cf73f75bf62c3e338fe56308912175f8955824a8cc1"} Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.930879 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.938879 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" event={"ID":"77cc0c29-605d-46d3-98a8-f9aeecbe888b","Type":"ContainerStarted","Data":"2377b42320181f5fb6ba6ca7afd2996ddcf2633905f89dce7c1f2322bb94206f"} Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.939534 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.943347 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" event={"ID":"5a824236-f0aa-4b02-8357-0c8275fa6509","Type":"ContainerStarted","Data":"8c4738058c1b1072c1a5d0cd3aff26acc01801e935fdba8f8e3eac3b75a48056"} Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.944018 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.957560 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" podStartSLOduration=2.820808326 podStartE2EDuration="20.957534164s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.18461237 +0000 UTC m=+1040.876281409" 
lastFinishedPulling="2026-02-16 15:10:33.321338208 +0000 UTC m=+1059.013007247" observedRunningTime="2026-02-16 15:10:33.95369135 +0000 UTC m=+1059.645360399" watchObservedRunningTime="2026-02-16 15:10:33.957534164 +0000 UTC m=+1059.649203203" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.957697 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" event={"ID":"383f552e-0d7a-4c2e-8931-1e0605d309e2","Type":"ContainerStarted","Data":"eeb0135d66f9c51f1146627e35f97e6c2a55fd5be737dee0e7373db61a56dbbf"} Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.960397 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.961379 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" event={"ID":"e7c3328c-8c35-4dab-8082-d7ee6d6c53f5","Type":"ContainerStarted","Data":"a04e170112e049b5031440fc5996a539569b11e9111473689a86a1bde59d36c3"} Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.961827 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.964700 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" event={"ID":"26e5a91e-b0ec-44ff-bcb7-edebf76310ce","Type":"ContainerStarted","Data":"6aefac4cbbec2b49314bd6269d519a1114ce1a2205efb2245ed353ff2ab2654e"} Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.965223 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" Feb 16 15:10:33 crc kubenswrapper[4748]: I0216 15:10:33.984580 4748 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" podStartSLOduration=3.385738015 podStartE2EDuration="19.984566588s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.943567232 +0000 UTC m=+1041.635236271" lastFinishedPulling="2026-02-16 15:10:32.542395785 +0000 UTC m=+1058.234064844" observedRunningTime="2026-02-16 15:10:33.982515958 +0000 UTC m=+1059.674184997" watchObservedRunningTime="2026-02-16 15:10:33.984566588 +0000 UTC m=+1059.676235617" Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.020550 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" podStartSLOduration=3.407764276 podStartE2EDuration="20.020529432s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.929445345 +0000 UTC m=+1041.621114384" lastFinishedPulling="2026-02-16 15:10:32.542210501 +0000 UTC m=+1058.233879540" observedRunningTime="2026-02-16 15:10:34.014279658 +0000 UTC m=+1059.705948697" watchObservedRunningTime="2026-02-16 15:10:34.020529432 +0000 UTC m=+1059.712198471" Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.088825 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" podStartSLOduration=4.825223461 podStartE2EDuration="21.088808559s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.791441126 +0000 UTC m=+1041.483110165" lastFinishedPulling="2026-02-16 15:10:32.055026224 +0000 UTC m=+1057.746695263" observedRunningTime="2026-02-16 15:10:34.08477792 +0000 UTC m=+1059.776446959" watchObservedRunningTime="2026-02-16 15:10:34.088808559 +0000 UTC m=+1059.780477598" Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.089240 4748 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" podStartSLOduration=2.952585386 podStartE2EDuration="20.089234809s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:16.25288371 +0000 UTC m=+1041.944552749" lastFinishedPulling="2026-02-16 15:10:33.389533133 +0000 UTC m=+1059.081202172" observedRunningTime="2026-02-16 15:10:34.048329675 +0000 UTC m=+1059.739998714" watchObservedRunningTime="2026-02-16 15:10:34.089234809 +0000 UTC m=+1059.780903848" Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.117952 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" podStartSLOduration=3.83338805 podStartE2EDuration="20.117936724s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:16.258565479 +0000 UTC m=+1041.950234518" lastFinishedPulling="2026-02-16 15:10:32.543114143 +0000 UTC m=+1058.234783192" observedRunningTime="2026-02-16 15:10:34.116224602 +0000 UTC m=+1059.807893641" watchObservedRunningTime="2026-02-16 15:10:34.117936724 +0000 UTC m=+1059.809605763" Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.984699 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" event={"ID":"8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325","Type":"ContainerStarted","Data":"0a57624424f05d85b9ec0c676e981bb0117fda06f88f645ebb3adf443766d95e"} Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.984775 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.986793 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" 
event={"ID":"ada66d46-4901-45cc-9b08-a3578fadfda0","Type":"ContainerStarted","Data":"b4e5515e21302f411cddac1552b00fc22780bb27fe6b1f2c20a732b6095935e1"} Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.986921 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.989663 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" event={"ID":"a6eb6394-3349-4a90-bf7a-6677191f0c5a","Type":"ContainerStarted","Data":"fc8fc246d2fc47a956addac283bfec39fc3f135b2e2aa9de66b17d728fe23577"} Feb 16 15:10:34 crc kubenswrapper[4748]: I0216 15:10:34.989755 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.002988 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.003023 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" event={"ID":"d7e9b369-11d8-4aa9-a3b2-db6b88904b51","Type":"ContainerStarted","Data":"32c7afc0cef0a75908e8be858bfd1960c12663533adbc2b5d5c87666fda15a7a"} Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.003042 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.003051 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" 
event={"ID":"aadfc2ec-ea6d-440c-9c0d-d5005e39230c","Type":"ContainerStarted","Data":"793c3f666fd448e5c6e2be99506a239f8f215b038991ac3d5a81504ac8354bcb"} Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.003060 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" event={"ID":"6b8c8de2-3f25-4adb-9598-3beceb5aab8f","Type":"ContainerStarted","Data":"ebae37568924fceaf52ee37994e68478718e691c5afa5f2e67d8494b64c0e174"} Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.003072 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.005654 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" event={"ID":"5e9f0b4c-6645-4cc6-ad91-043721d84e74","Type":"ContainerStarted","Data":"6f7efa4818389bc6043105e8c8b848d3c0eb9f061a4be04d65dc4caa33df6638"} Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.005803 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.008260 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" event={"ID":"240e408e-e2ef-4375-a604-f5b29fc5bdfc","Type":"ContainerStarted","Data":"c602c0c826e3dde3c9f0a57d17b581c3afe41458db65ae30d5bc3134aba84c46"} Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.011326 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" podStartSLOduration=3.882275849 podStartE2EDuration="22.011307667s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 
15:10:15.193045508 +0000 UTC m=+1040.884714547" lastFinishedPulling="2026-02-16 15:10:33.322077326 +0000 UTC m=+1059.013746365" observedRunningTime="2026-02-16 15:10:35.007876873 +0000 UTC m=+1060.699545912" watchObservedRunningTime="2026-02-16 15:10:35.011307667 +0000 UTC m=+1060.702976696" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.046986 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" podStartSLOduration=4.450806185 podStartE2EDuration="21.046969223s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.94388969 +0000 UTC m=+1041.635558729" lastFinishedPulling="2026-02-16 15:10:32.540052688 +0000 UTC m=+1058.231721767" observedRunningTime="2026-02-16 15:10:35.04237849 +0000 UTC m=+1060.734047529" watchObservedRunningTime="2026-02-16 15:10:35.046969223 +0000 UTC m=+1060.738638262" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.071882 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" podStartSLOduration=5.283158807 podStartE2EDuration="22.071859894s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.753519214 +0000 UTC m=+1041.445188253" lastFinishedPulling="2026-02-16 15:10:32.542220301 +0000 UTC m=+1058.233889340" observedRunningTime="2026-02-16 15:10:35.067513527 +0000 UTC m=+1060.759182566" watchObservedRunningTime="2026-02-16 15:10:35.071859894 +0000 UTC m=+1060.763528933" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.099048 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" podStartSLOduration=5.313210405 podStartE2EDuration="22.099025531s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 
15:10:15.755016311 +0000 UTC m=+1041.446685350" lastFinishedPulling="2026-02-16 15:10:32.540831437 +0000 UTC m=+1058.232500476" observedRunningTime="2026-02-16 15:10:35.097123865 +0000 UTC m=+1060.788792904" watchObservedRunningTime="2026-02-16 15:10:35.099025531 +0000 UTC m=+1060.790694580" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.193362 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" podStartSLOduration=3.734658265 podStartE2EDuration="21.193343688s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.862937742 +0000 UTC m=+1041.554606781" lastFinishedPulling="2026-02-16 15:10:33.321623165 +0000 UTC m=+1059.013292204" observedRunningTime="2026-02-16 15:10:35.191085163 +0000 UTC m=+1060.882754202" watchObservedRunningTime="2026-02-16 15:10:35.193343688 +0000 UTC m=+1060.885012727" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.196361 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" podStartSLOduration=3.683193211 podStartE2EDuration="21.196351912s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.858171945 +0000 UTC m=+1041.549840984" lastFinishedPulling="2026-02-16 15:10:33.371330656 +0000 UTC m=+1059.062999685" observedRunningTime="2026-02-16 15:10:35.152827973 +0000 UTC m=+1060.844497012" watchObservedRunningTime="2026-02-16 15:10:35.196351912 +0000 UTC m=+1060.888020941" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.212314 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.273376 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" podStartSLOduration=4.915568831 podStartE2EDuration="21.273361593s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:16.184446539 +0000 UTC m=+1041.876115578" lastFinishedPulling="2026-02-16 15:10:32.542239301 +0000 UTC m=+1058.233908340" observedRunningTime="2026-02-16 15:10:35.234401437 +0000 UTC m=+1060.926070476" watchObservedRunningTime="2026-02-16 15:10:35.273361593 +0000 UTC m=+1060.965030632" Feb 16 15:10:35 crc kubenswrapper[4748]: I0216 15:10:35.273984 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" podStartSLOduration=4.11264736 podStartE2EDuration="21.273978299s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:16.259078802 +0000 UTC m=+1041.950747841" lastFinishedPulling="2026-02-16 15:10:33.420409731 +0000 UTC m=+1059.112078780" observedRunningTime="2026-02-16 15:10:35.270318029 +0000 UTC m=+1060.961987068" watchObservedRunningTime="2026-02-16 15:10:35.273978299 +0000 UTC m=+1060.965647338" Feb 16 15:10:39 crc kubenswrapper[4748]: I0216 15:10:39.053490 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" event={"ID":"585bdba9-5fef-469b-a5a2-8b4a15719360","Type":"ContainerStarted","Data":"c696a1eb6e0c765635e9c6b8b6eaca9d2a21b1073283e59c8a03715f3c12fa44"} Feb 16 15:10:39 crc kubenswrapper[4748]: I0216 15:10:39.055562 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" event={"ID":"2e85817e-216a-4784-880a-f433c52032af","Type":"ContainerStarted","Data":"7d27e9285475de43710cebe5626f57b476937ea0e19e624815a4c56ee6d168a1"} Feb 16 15:10:39 crc kubenswrapper[4748]: I0216 15:10:39.055919 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" Feb 16 15:10:39 crc kubenswrapper[4748]: I0216 15:10:39.087653 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-zw2pr" podStartSLOduration=3.048568932 podStartE2EDuration="25.08762388s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:16.252700075 +0000 UTC m=+1041.944369114" lastFinishedPulling="2026-02-16 15:10:38.291755023 +0000 UTC m=+1063.983424062" observedRunningTime="2026-02-16 15:10:39.077135692 +0000 UTC m=+1064.768804771" watchObservedRunningTime="2026-02-16 15:10:39.08762388 +0000 UTC m=+1064.779292939" Feb 16 15:10:39 crc kubenswrapper[4748]: I0216 15:10:39.103943 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" podStartSLOduration=2.769837817 podStartE2EDuration="25.10391673s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.943901381 +0000 UTC m=+1041.635570420" lastFinishedPulling="2026-02-16 15:10:38.277980294 +0000 UTC m=+1063.969649333" observedRunningTime="2026-02-16 15:10:39.100282241 +0000 UTC m=+1064.791951320" watchObservedRunningTime="2026-02-16 15:10:39.10391673 +0000 UTC m=+1064.795585789" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.278669 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987464f4-7vgh9" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.332930 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69f49c598c-xf72s" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.389787 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-554564d7fc-jvjnc" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.521387 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-868647ff47-zlwvz" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.556553 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d8bf5c495-64bzs" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.663376 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-d44cf6b75-wjnt6" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.693976 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-8497b45c89-bzrm7" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.742546 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-54f6768c69-tlr6k" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.778779 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68f46476f-q95f2" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.786949 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64ddbf8bb-z478p" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.813784 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-6ccb9b958b-7h287" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.846328 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/octavia-operator-controller-manager-69f8888797-99tg5" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.864098 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7866795846-pktjs" Feb 16 15:10:44 crc kubenswrapper[4748]: I0216 15:10:44.886104 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-567668f5cf-fk48d" Feb 16 15:10:45 crc kubenswrapper[4748]: I0216 15:10:45.215105 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5db88f68c-8g4n4" Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.003219 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.011508 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/47a580d2-e511-4827-bc01-91189c1e34e9-cert\") pod \"infra-operator-controller-manager-79d975b745-szbcv\" (UID: \"47a580d2-e511-4827-bc01-91189c1e34e9\") " pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.205904 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.307876 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.313921 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/58388a3c-6479-40b7-a5cb-4d83fc2a38b3-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n\" (UID: \"58388a3c-6479-40b7-a5cb-4d83fc2a38b3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.438271 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.506152 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79d975b745-szbcv"]
Feb 16 15:10:46 crc kubenswrapper[4748]: W0216 15:10:46.512445 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod47a580d2_e511_4827_bc01_91189c1e34e9.slice/crio-09157a47fa52325d128f310488de17da4292ce59ab8536e401fd5288dfcbc7b8 WatchSource:0}: Error finding container 09157a47fa52325d128f310488de17da4292ce59ab8536e401fd5288dfcbc7b8: Status 404 returned error can't find the container with id 09157a47fa52325d128f310488de17da4292ce59ab8536e401fd5288dfcbc7b8
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.707898 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"]
Feb 16 15:10:46 crc kubenswrapper[4748]: W0216 15:10:46.716051 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58388a3c_6479_40b7_a5cb_4d83fc2a38b3.slice/crio-ca2af6edee9fc63c5c8bc01a98848ea8660b1e57d726593f11614521fa834d32 WatchSource:0}: Error finding container ca2af6edee9fc63c5c8bc01a98848ea8660b1e57d726593f11614521fa834d32: Status 404 returned error can't find the container with id ca2af6edee9fc63c5c8bc01a98848ea8660b1e57d726593f11614521fa834d32
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.726888 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") "
pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.727000 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.731187 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-metrics-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.731994 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b0e6e37b-e4a2-4013-9678-7412c55e0fd0-webhook-certs\") pod \"openstack-operator-controller-manager-5b45b684f5-xhrt2\" (UID: \"b0e6e37b-e4a2-4013-9678-7412c55e0fd0\") " pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"
Feb 16 15:10:46 crc kubenswrapper[4748]: I0216 15:10:46.802620 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"
Feb 16 15:10:47 crc kubenswrapper[4748]: I0216 15:10:47.110774 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" event={"ID":"58388a3c-6479-40b7-a5cb-4d83fc2a38b3","Type":"ContainerStarted","Data":"ca2af6edee9fc63c5c8bc01a98848ea8660b1e57d726593f11614521fa834d32"}
Feb 16 15:10:47 crc kubenswrapper[4748]: I0216 15:10:47.114282 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" event={"ID":"47a580d2-e511-4827-bc01-91189c1e34e9","Type":"ContainerStarted","Data":"09157a47fa52325d128f310488de17da4292ce59ab8536e401fd5288dfcbc7b8"}
Feb 16 15:10:47 crc kubenswrapper[4748]: I0216 15:10:47.376615 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"]
Feb 16 15:10:47 crc kubenswrapper[4748]: W0216 15:10:47.390431 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0e6e37b_e4a2_4013_9678_7412c55e0fd0.slice/crio-1d9688b1d8071d3bee9d6d5c6464d9e8b180861f0b720b6d5b9467bfa8b81d81 WatchSource:0}: Error finding container 1d9688b1d8071d3bee9d6d5c6464d9e8b180861f0b720b6d5b9467bfa8b81d81: Status 404 returned error can't find the container with id 1d9688b1d8071d3bee9d6d5c6464d9e8b180861f0b720b6d5b9467bfa8b81d81
Feb 16 15:10:48 crc kubenswrapper[4748]: I0216 15:10:48.122639 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" event={"ID":"b0e6e37b-e4a2-4013-9678-7412c55e0fd0","Type":"ContainerStarted","Data":"1d9688b1d8071d3bee9d6d5c6464d9e8b180861f0b720b6d5b9467bfa8b81d81"}
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.152856 4748 kubelet.go:2453] "SyncLoop
(PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" event={"ID":"f39955d7-4055-4a9d-8c21-eafa5ddd3f7f","Type":"ContainerStarted","Data":"bf12c3f1d8aec92fad7eed22dab88d316eadf582a6b9412d8cc3b31cc9241a2f"}
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.153901 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.155265 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" event={"ID":"08a4c7e1-1e32-4f6e-8fdc-d622dbe06059","Type":"ContainerStarted","Data":"856dc96f880b1fc1c67c146bf8029d8d4c756c1a31947419ca09696f238499bc"}
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.155694 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.158447 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" event={"ID":"6c462cae-e6f6-4551-a63f-783b5355050d","Type":"ContainerStarted","Data":"cd977b81cc8630181615016fb4748cbae924071f3f406d089b0a472f4198f95e"}
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.158643 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.160262 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" event={"ID":"b0e6e37b-e4a2-4013-9678-7412c55e0fd0","Type":"ContainerStarted","Data":"141e794acb9f05c7818ec5257fc8ccfb41b9e29b4043a0fd14dabb81bbf40aba"}
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.160353
4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.161516 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" event={"ID":"c8282c68-cc06-4252-be3f-12fd375413d5","Type":"ContainerStarted","Data":"3bb4ed63f75d1a117d1a87efb6f930fb5dae5201e6fa597f5926cd078c8bda71"}
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.162034 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.181886 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx" podStartSLOduration=2.257122404 podStartE2EDuration="37.181865141s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.929944348 +0000 UTC m=+1041.621613387" lastFinishedPulling="2026-02-16 15:10:50.854687085 +0000 UTC m=+1076.546356124" observedRunningTime="2026-02-16 15:10:51.177698459 +0000 UTC m=+1076.869367518" watchObservedRunningTime="2026-02-16 15:10:51.181865141 +0000 UTC m=+1076.873534190"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.213453 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc" podStartSLOduration=3.265793817 podStartE2EDuration="38.213431787s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.906881491 +0000 UTC m=+1041.598550530" lastFinishedPulling="2026-02-16 15:10:50.854519461 +0000 UTC m=+1076.546188500" observedRunningTime="2026-02-16 15:10:51.207605434 +0000 UTC m=+1076.899274483" watchObservedRunningTime="2026-02-16
15:10:51.213431787 +0000 UTC m=+1076.905100826"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.232051 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg" podStartSLOduration=2.430437259 podStartE2EDuration="38.232035124s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.053868749 +0000 UTC m=+1040.745537788" lastFinishedPulling="2026-02-16 15:10:50.855466614 +0000 UTC m=+1076.547135653" observedRunningTime="2026-02-16 15:10:51.2286319 +0000 UTC m=+1076.920300949" watchObservedRunningTime="2026-02-16 15:10:51.232035124 +0000 UTC m=+1076.923704173"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.250984 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5" podStartSLOduration=2.809702025 podStartE2EDuration="38.250966489s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 15:10:15.413591095 +0000 UTC m=+1041.105260134" lastFinishedPulling="2026-02-16 15:10:50.854855559 +0000 UTC m=+1076.546524598" observedRunningTime="2026-02-16 15:10:51.247466023 +0000 UTC m=+1076.939135062" watchObservedRunningTime="2026-02-16 15:10:51.250966489 +0000 UTC m=+1076.942635518"
Feb 16 15:10:51 crc kubenswrapper[4748]: I0216 15:10:51.278201 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2" podStartSLOduration=37.278170727 podStartE2EDuration="37.278170727s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:10:51.275112652 +0000 UTC m=+1076.966781691" watchObservedRunningTime="2026-02-16 15:10:51.278170727 +0000 UTC m=+1076.969839766"
Feb 16 15:10:54 crc
kubenswrapper[4748]: I0216 15:10:54.185127 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" event={"ID":"58388a3c-6479-40b7-a5cb-4d83fc2a38b3","Type":"ContainerStarted","Data":"0e4e35c392ac3fdf273bb1c74dfb5f786a2781875655273194460e3a1f7e4bc3"}
Feb 16 15:10:54 crc kubenswrapper[4748]: I0216 15:10:54.186327 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"
Feb 16 15:10:54 crc kubenswrapper[4748]: I0216 15:10:54.187631 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" event={"ID":"47a580d2-e511-4827-bc01-91189c1e34e9","Type":"ContainerStarted","Data":"63d888f42adcc36c6961c819e3ba8994f2fdca0ecc0c21f146a56783eef65104"}
Feb 16 15:10:54 crc kubenswrapper[4748]: I0216 15:10:54.187993 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv"
Feb 16 15:10:54 crc kubenswrapper[4748]: I0216 15:10:54.227273 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n" podStartSLOduration=33.6023493 podStartE2EDuration="40.227237112s" podCreationTimestamp="2026-02-16 15:10:14 +0000 UTC" firstStartedPulling="2026-02-16 15:10:46.719084375 +0000 UTC m=+1072.410753414" lastFinishedPulling="2026-02-16 15:10:53.343972177 +0000 UTC m=+1079.035641226" observedRunningTime="2026-02-16 15:10:54.218896247 +0000 UTC m=+1079.910565296" watchObservedRunningTime="2026-02-16 15:10:54.227237112 +0000 UTC m=+1079.918906191"
Feb 16 15:10:54 crc kubenswrapper[4748]: I0216 15:10:54.242029 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration"
pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv" podStartSLOduration=34.384056529 podStartE2EDuration="41.242000305s" podCreationTimestamp="2026-02-16 15:10:13 +0000 UTC" firstStartedPulling="2026-02-16 15:10:46.517000792 +0000 UTC m=+1072.208669831" lastFinishedPulling="2026-02-16 15:10:53.374944528 +0000 UTC m=+1079.066613607" observedRunningTime="2026-02-16 15:10:54.236789037 +0000 UTC m=+1079.928458076" watchObservedRunningTime="2026-02-16 15:10:54.242000305 +0000 UTC m=+1079.933669344"
Feb 16 15:10:56 crc kubenswrapper[4748]: I0216 15:10:56.813194 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5b45b684f5-xhrt2"
Feb 16 15:11:04 crc kubenswrapper[4748]: I0216 15:11:04.262477 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-5d946d989d-rmvdg"
Feb 16 15:11:04 crc kubenswrapper[4748]: I0216 15:11:04.354552 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5b9b8895d5-2lgdc"
Feb 16 15:11:04 crc kubenswrapper[4748]: I0216 15:11:04.482195 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b4d948c87-ss7f5"
Feb 16 15:11:04 crc kubenswrapper[4748]: I0216 15:11:04.763932 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6994f66f48-594mx"
Feb 16 15:11:06 crc kubenswrapper[4748]: I0216 15:11:06.212504 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79d975b745-szbcv"
Feb 16 15:11:06 crc kubenswrapper[4748]: I0216 15:11:06.447624 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.617922 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r2qr5"]
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.619936 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.625292 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.625509 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.625694 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.625906 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-bb24w"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.643216 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r2qr5"]
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.685052 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cqnkj"]
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.686243 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.688526 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.699232 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cqnkj"]
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.736385 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvt7g\" (UniqueName: \"kubernetes.io/projected/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-kube-api-access-xvt7g\") pod \"dnsmasq-dns-675f4bcbfc-r2qr5\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.736460 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-config\") pod \"dnsmasq-dns-675f4bcbfc-r2qr5\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.838003 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvt7g\" (UniqueName: \"kubernetes.io/projected/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-kube-api-access-xvt7g\") pod \"dnsmasq-dns-675f4bcbfc-r2qr5\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.838071 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj" Feb
16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.838098 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-config\") pod \"dnsmasq-dns-675f4bcbfc-r2qr5\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.838168 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-config\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.838212 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wbln\" (UniqueName: \"kubernetes.io/projected/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-kube-api-access-9wbln\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.839584 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-config\") pod \"dnsmasq-dns-675f4bcbfc-r2qr5\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.862812 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvt7g\" (UniqueName: \"kubernetes.io/projected/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-kube-api-access-xvt7g\") pod \"dnsmasq-dns-675f4bcbfc-r2qr5\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216
15:11:26.939597 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-config\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.939826 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wbln\" (UniqueName: \"kubernetes.io/projected/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-kube-api-access-9wbln\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.939995 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.940537 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-config\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.941361 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.942439 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5"
Feb 16 15:11:26 crc kubenswrapper[4748]: I0216 15:11:26.959407 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wbln\" (UniqueName: \"kubernetes.io/projected/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-kube-api-access-9wbln\") pod \"dnsmasq-dns-78dd6ddcc-cqnkj\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:27 crc kubenswrapper[4748]: I0216 15:11:27.000456 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
Feb 16 15:11:27 crc kubenswrapper[4748]: I0216 15:11:27.432465 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r2qr5"]
Feb 16 15:11:27 crc kubenswrapper[4748]: I0216 15:11:27.511267 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cqnkj"]
Feb 16 15:11:27 crc kubenswrapper[4748]: W0216 15:11:27.517402 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3dc102_6f1a_4c76_bbde_de2e2da2c9d6.slice/crio-f83aac058e625dbb0aedf792a2db17c3ff356ae4df0b4e6594aa7c2a89fb4e71 WatchSource:0}: Error finding container f83aac058e625dbb0aedf792a2db17c3ff356ae4df0b4e6594aa7c2a89fb4e71: Status 404 returned error can't find the container with id f83aac058e625dbb0aedf792a2db17c3ff356ae4df0b4e6594aa7c2a89fb4e71
Feb 16 15:11:27 crc kubenswrapper[4748]: I0216 15:11:27.558361 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5" event={"ID":"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504","Type":"ContainerStarted","Data":"31f3696133127ed2edb4a03a56f3d27c45c1a3e69ff846a5f6a5258f2bfff96f"}
Feb 16 15:11:27 crc kubenswrapper[4748]: I0216 15:11:27.559847 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj"
event={"ID":"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6","Type":"ContainerStarted","Data":"f83aac058e625dbb0aedf792a2db17c3ff356ae4df0b4e6594aa7c2a89fb4e71"}
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.391983 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r2qr5"]
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.426658 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2zvv2"]
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.430612 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.459904 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2zvv2"]
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.590629 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.590786 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trvj2\" (UniqueName: \"kubernetes.io/projected/a288572d-d385-4a03-88d6-d0e4e120f062-kube-api-access-trvj2\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.590825 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-config\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") "
pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.691781 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trvj2\" (UniqueName: \"kubernetes.io/projected/a288572d-d385-4a03-88d6-d0e4e120f062-kube-api-access-trvj2\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.692158 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-config\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.692197 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.693157 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-dns-svc\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.693231 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-config\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.718847 4748
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trvj2\" (UniqueName: \"kubernetes.io/projected/a288572d-d385-4a03-88d6-d0e4e120f062-kube-api-access-trvj2\") pod \"dnsmasq-dns-666b6646f7-2zvv2\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.753270 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.759253 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cqnkj"]
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.787406 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tzdz5"]
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.788611 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.801909 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tzdz5"]
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.895397 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-config\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.895877 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtzq8\" (UniqueName: \"kubernetes.io/projected/bcca3048-157c-4d9d-9958-851e67a08b81-kube-api-access-mtzq8\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.895984 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.997657 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtzq8\" (UniqueName: \"kubernetes.io/projected/bcca3048-157c-4d9d-9958-851e67a08b81-kube-api-access-mtzq8\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.997779 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.997804 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-config\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.998752 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-config\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:11:29 crc kubenswrapper[4748]: I0216 15:11:29.999521 4748
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.022132 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtzq8\" (UniqueName: \"kubernetes.io/projected/bcca3048-157c-4d9d-9958-851e67a08b81-kube-api-access-mtzq8\") pod \"dnsmasq-dns-57d769cc4f-tzdz5\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.177253 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.326072 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2zvv2"] Feb 16 15:11:30 crc kubenswrapper[4748]: W0216 15:11:30.387599 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda288572d_d385_4a03_88d6_d0e4e120f062.slice/crio-e0b0e717ac89a0a5608a9a7de92be16680126dd2d69f22afea43c3c7e8a9ed35 WatchSource:0}: Error finding container e0b0e717ac89a0a5608a9a7de92be16680126dd2d69f22afea43c3c7e8a9ed35: Status 404 returned error can't find the container with id e0b0e717ac89a0a5608a9a7de92be16680126dd2d69f22afea43c3c7e8a9ed35 Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.582279 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.586184 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.589161 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-p6ghv" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.589351 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.589592 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.590911 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.591323 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.596274 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.596374 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.616125 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.644971 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" event={"ID":"a288572d-d385-4a03-88d6-d0e4e120f062","Type":"ContainerStarted","Data":"e0b0e717ac89a0a5608a9a7de92be16680126dd2d69f22afea43c3c7e8a9ed35"} Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.710468 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-confd\") pod 
\"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.710537 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qblm\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-kube-api-access-8qblm\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.710655 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da81a287-d981-4b30-8d23-70cbc085368e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.710680 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.710903 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.711012 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.711057 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.711090 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.711121 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-config-data\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.711141 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.711161 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da81a287-d981-4b30-8d23-70cbc085368e-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.724920 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tzdz5"] Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813025 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-config-data\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813088 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813114 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da81a287-d981-4b30-8d23-70cbc085368e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813157 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813179 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qblm\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-kube-api-access-8qblm\") pod 
\"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813277 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da81a287-d981-4b30-8d23-70cbc085368e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813314 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813355 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813393 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.813426 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 
15:11:30.813456 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.815973 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-config-data\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.819040 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-server-conf\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.819244 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.820098 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.820800 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/da81a287-d981-4b30-8d23-70cbc085368e-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.825052 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.825110 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7c78b8c364ba747460f511e6031d3e58eb57a454cbb351e201167b5a0a7e52dd/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.828519 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/da81a287-d981-4b30-8d23-70cbc085368e-pod-info\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.831471 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.837009 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/da81a287-d981-4b30-8d23-70cbc085368e-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.837387 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.842477 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qblm\" (UniqueName: \"kubernetes.io/projected/da81a287-d981-4b30-8d23-70cbc085368e-kube-api-access-8qblm\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.857782 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f97b1174-9b2b-4172-b2a2-596720256d2a\") pod \"rabbitmq-server-0\" (UID: \"da81a287-d981-4b30-8d23-70cbc085368e\") " pod="openstack/rabbitmq-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.898978 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.901883 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.909699 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.910001 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.910205 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.910412 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9xhcp" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.910672 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.910855 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.911024 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.919483 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:11:30 crc kubenswrapper[4748]: I0216 15:11:30.953523 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018348 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018482 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018511 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018555 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b081f805-b462-406b-9d37-5aef68dd9edc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018583 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018620 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018652 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b081f805-b462-406b-9d37-5aef68dd9edc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018677 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018703 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfcgm\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-kube-api-access-qfcgm\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018813 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.018862 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121109 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121175 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121211 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b081f805-b462-406b-9d37-5aef68dd9edc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121243 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121300 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b081f805-b462-406b-9d37-5aef68dd9edc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121368 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121391 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfcgm\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-kube-api-access-qfcgm\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121428 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc 
kubenswrapper[4748]: I0216 15:11:31.121492 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.121529 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.122826 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.122893 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.124386 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.124516 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.127072 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b081f805-b462-406b-9d37-5aef68dd9edc-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.135609 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b081f805-b462-406b-9d37-5aef68dd9edc-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.136017 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.136055 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e569cecc188e2caf7d6af90da7e6f3fef75e3e9e226ed414828600c34fb91bb8/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.136317 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b081f805-b462-406b-9d37-5aef68dd9edc-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.136519 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.137122 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.145488 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfcgm\" (UniqueName: \"kubernetes.io/projected/b081f805-b462-406b-9d37-5aef68dd9edc-kube-api-access-qfcgm\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.183174 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d83d59a3-6f2d-4fde-a338-2560466e7b52\") pod \"rabbitmq-cell1-server-0\" (UID: \"b081f805-b462-406b-9d37-5aef68dd9edc\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:11:31 crc kubenswrapper[4748]: I0216 15:11:31.250606 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.072546 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.073976 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.077539 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-w4rtc"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.078032 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.078169 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.078341 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.081935 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.091521 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140280 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-kolla-config\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140495 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-operator-scripts\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140573 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140594 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/712fd752-5464-47c2-851e-b5b54a2cf335-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140782 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fd752-5464-47c2-851e-b5b54a2cf335-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140856 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghdv6\" (UniqueName: \"kubernetes.io/projected/712fd752-5464-47c2-851e-b5b54a2cf335-kube-api-access-ghdv6\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140941 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-config-data-default\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.140967 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/712fd752-5464-47c2-851e-b5b54a2cf335-config-data-generated\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.242927 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-config-data-default\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.242988 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/712fd752-5464-47c2-851e-b5b54a2cf335-config-data-generated\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.243027 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-kolla-config\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.243092 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-operator-scripts\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.243132 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.243154 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/712fd752-5464-47c2-851e-b5b54a2cf335-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.243206 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fd752-5464-47c2-851e-b5b54a2cf335-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.243279 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghdv6\" (UniqueName: \"kubernetes.io/projected/712fd752-5464-47c2-851e-b5b54a2cf335-kube-api-access-ghdv6\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.243720 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/712fd752-5464-47c2-851e-b5b54a2cf335-config-data-generated\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.244119 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-config-data-default\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.244448 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-kolla-config\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.245485 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/712fd752-5464-47c2-851e-b5b54a2cf335-operator-scripts\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.247378 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.247433 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0cd9b02b058bb5841092cbf286c9386da80d3e039efd0ecfc41182fcad81a8a6/globalmount\"" pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.258009 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/712fd752-5464-47c2-851e-b5b54a2cf335-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.259724 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/712fd752-5464-47c2-851e-b5b54a2cf335-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.262377 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghdv6\" (UniqueName: \"kubernetes.io/projected/712fd752-5464-47c2-851e-b5b54a2cf335-kube-api-access-ghdv6\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.304149 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b327c21b-2e4d-4513-9f17-32cc84051d50\") pod \"openstack-galera-0\" (UID: \"712fd752-5464-47c2-851e-b5b54a2cf335\") " pod="openstack/openstack-galera-0"
Feb 16 15:11:32 crc kubenswrapper[4748]: I0216 15:11:32.403062 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.434950 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.437025 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.443550 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.444120 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.444176 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.444394 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-772q5"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.447238 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.564988 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.565825 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.565883 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.565924 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp2jw\" (UniqueName: \"kubernetes.io/projected/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-kube-api-access-cp2jw\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.565983 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.566037 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.566086 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.566132 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668296 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp2jw\" (UniqueName: \"kubernetes.io/projected/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-kube-api-access-cp2jw\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668401 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668449 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668483 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668516 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668558 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668586 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.668631 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.669554 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.669829 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.669875 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.671055 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.672585 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.672622 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/938157dc982d9467cd346b41dad9f1bb4332dce932fa41aa73b7a7ed2873beb8/globalmount\"" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.673594 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.683554 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.690026 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp2jw\" (UniqueName: \"kubernetes.io/projected/ccc28d79-7cdc-4fac-95bb-2f041b1f25f1-kube-api-access-cp2jw\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.709782 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-30b15b5c-fb1f-43df-a67a-ede6a0a7e5bc\") pod \"openstack-cell1-galera-0\" (UID: \"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1\") " pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.768142 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.782539 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.783508 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.789025 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.789281 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.789442 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5sb8q"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.808104 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.871936 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bef88b-c878-46e0-960b-f77594421c27-combined-ca-bundle\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.872031 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/23bef88b-c878-46e0-960b-f77594421c27-memcached-tls-certs\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.872072 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23bef88b-c878-46e0-960b-f77594421c27-kolla-config\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.872093 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23bef88b-c878-46e0-960b-f77594421c27-config-data\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.872117 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml72w\" (UniqueName: \"kubernetes.io/projected/23bef88b-c878-46e0-960b-f77594421c27-kube-api-access-ml72w\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.973981 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23bef88b-c878-46e0-960b-f77594421c27-config-data\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.974060 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml72w\" (UniqueName: \"kubernetes.io/projected/23bef88b-c878-46e0-960b-f77594421c27-kube-api-access-ml72w\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.974135 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bef88b-c878-46e0-960b-f77594421c27-combined-ca-bundle\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.974206 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/23bef88b-c878-46e0-960b-f77594421c27-memcached-tls-certs\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.974243 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23bef88b-c878-46e0-960b-f77594421c27-kolla-config\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.975060 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/23bef88b-c878-46e0-960b-f77594421c27-config-data\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.975234 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23bef88b-c878-46e0-960b-f77594421c27-kolla-config\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.980506 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23bef88b-c878-46e0-960b-f77594421c27-combined-ca-bundle\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.982630 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/23bef88b-c878-46e0-960b-f77594421c27-memcached-tls-certs\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:33 crc kubenswrapper[4748]: I0216 15:11:33.994788 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml72w\" (UniqueName: \"kubernetes.io/projected/23bef88b-c878-46e0-960b-f77594421c27-kube-api-access-ml72w\") pod \"memcached-0\" (UID: \"23bef88b-c878-46e0-960b-f77594421c27\") " pod="openstack/memcached-0"
Feb 16 15:11:34 crc kubenswrapper[4748]: I0216 15:11:34.111260 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 16 15:11:34 crc kubenswrapper[4748]: I0216 15:11:34.685223 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" event={"ID":"bcca3048-157c-4d9d-9958-851e67a08b81","Type":"ContainerStarted","Data":"797a36112f759ad9d0519051f330c4bb4787a05f500ec02e7118bf726c8e7eee"}
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.133583 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.135620 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.146195 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-hkp79"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.157923 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.218124 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cn79\" (UniqueName: \"kubernetes.io/projected/c06fd3c2-2bb4-40e8-8911-4f30daf28f43-kube-api-access-9cn79\") pod \"kube-state-metrics-0\" (UID: \"c06fd3c2-2bb4-40e8-8911-4f30daf28f43\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.319645 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cn79\" (UniqueName: \"kubernetes.io/projected/c06fd3c2-2bb4-40e8-8911-4f30daf28f43-kube-api-access-9cn79\") pod \"kube-state-metrics-0\" (UID: \"c06fd3c2-2bb4-40e8-8911-4f30daf28f43\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.342983 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cn79\" (UniqueName: \"kubernetes.io/projected/c06fd3c2-2bb4-40e8-8911-4f30daf28f43-kube-api-access-9cn79\") pod \"kube-state-metrics-0\" (UID: \"c06fd3c2-2bb4-40e8-8911-4f30daf28f43\") " pod="openstack/kube-state-metrics-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.457934 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.713586 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.716086 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.726643 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-6g7lf"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.726875 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.726990 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.727097 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.727305 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.736075 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"]
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.829197 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.829277 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.829421 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.829482 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.829530 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.829562 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.829596 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh577\" (UniqueName: \"kubernetes.io/projected/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-kube-api-access-fh577\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.931563 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.931665 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.931737 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.931781 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:11:36 crc kubenswrapper[4748]: I0216
15:11:36.931816 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh577\" (UniqueName: \"kubernetes.io/projected/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-kube-api-access-fh577\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.931858 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.931887 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.932403 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.937017 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: 
I0216 15:11:36.938885 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.938794 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.940404 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.947845 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:36 crc kubenswrapper[4748]: I0216 15:11:36.951864 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh577\" (UniqueName: \"kubernetes.io/projected/ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c-kube-api-access-fh577\") pod \"alertmanager-metric-storage-0\" (UID: \"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c\") " pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.084330 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.274308 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.276475 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.278575 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.279440 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.279545 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.279594 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.279751 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.279851 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-r4jkk" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.279545 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.279863 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.299296 4748 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.338495 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.338601 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.338629 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.338666 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.338849 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.338892 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.340410 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.340463 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.340487 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkwcz\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-kube-api-access-lkwcz\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 
15:11:37.340571 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.442982 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443039 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443073 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkwcz\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-kube-api-access-lkwcz\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443128 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 
15:11:37.443151 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443178 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443204 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443230 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443279 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: 
\"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.443322 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.445377 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.446061 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.447561 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.447689 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.448641 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.448669 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/36679523bf6c4e41657f99198418f26740d0c20952506ea2d93db24b45c1d1d0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.449102 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.451680 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.454611 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.464767 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkwcz\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-kube-api-access-lkwcz\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.465481 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.490105 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:37 crc kubenswrapper[4748]: I0216 15:11:37.601886 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.885419 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.888229 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.894439 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.894981 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.909046 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-lm42w" Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.909548 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.909832 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 16 15:11:39 crc kubenswrapper[4748]: I0216 15:11:39.910645 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.014086 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.014296 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-config\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.014358 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.014471 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.014858 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7pd4\" (UniqueName: \"kubernetes.io/projected/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-kube-api-access-s7pd4\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.015034 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.015238 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.015330 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120556 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120662 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120697 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120741 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120781 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-config\") pod 
\"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120833 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120859 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.120934 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7pd4\" (UniqueName: \"kubernetes.io/projected/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-kube-api-access-s7pd4\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.124094 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.125251 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-config\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.127553 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.128481 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.128513 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/aae75241cdb98276bc0f505d2a2f947564a0e6252431dce202e23c7908f80e8b/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.132258 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.135443 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.141573 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.145539 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7pd4\" (UniqueName: \"kubernetes.io/projected/252aec5a-72dd-4699-b9b8-72dc1c8bd1a8-kube-api-access-s7pd4\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.195422 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d449548e-b8fa-447b-a8d3-cb022ac0edc9\") pod \"ovsdbserver-nb-0\" (UID: \"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8\") " pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.235821 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8q6bn"] Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.238421 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.241555 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-j6599" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.241789 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.242422 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.245431 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8q6bn"] Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.246234 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.253231 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rhspv"] Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.257135 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.261267 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rhspv"] Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.323763 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-combined-ca-bundle\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.323824 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-log\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.323877 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-log-ovn\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.324022 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-run\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.324983 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-run\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325197 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8llxv\" (UniqueName: \"kubernetes.io/projected/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-kube-api-access-8llxv\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325254 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-ovn-controller-tls-certs\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325319 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-scripts\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325515 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-run-ovn\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325647 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgw8c\" (UniqueName: 
\"kubernetes.io/projected/62617783-e02a-4d59-b7a1-36206106585b-kube-api-access-kgw8c\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325753 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-lib\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325816 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-etc-ovs\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.325844 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62617783-e02a-4d59-b7a1-36206106585b-scripts\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.429661 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8llxv\" (UniqueName: \"kubernetes.io/projected/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-kube-api-access-8llxv\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.429817 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-ovn-controller-tls-certs\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.430804 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-scripts\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.430866 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-run-ovn\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.430955 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgw8c\" (UniqueName: \"kubernetes.io/projected/62617783-e02a-4d59-b7a1-36206106585b-kube-api-access-kgw8c\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431080 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-lib\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431127 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-etc-ovs\") pod \"ovn-controller-ovs-rhspv\" (UID: 
\"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431151 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62617783-e02a-4d59-b7a1-36206106585b-scripts\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431198 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-combined-ca-bundle\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431238 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-log\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431304 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-log-ovn\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431337 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-run\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.431464 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-run\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.432235 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-log\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.432256 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-run-ovn\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.432308 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-log-ovn\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.432379 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-run\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.432396 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-var-lib\") pod \"ovn-controller-ovs-rhspv\" 
(UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.432544 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/62617783-e02a-4d59-b7a1-36206106585b-etc-ovs\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.432674 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-var-run\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.433702 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-scripts\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.435837 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-ovn-controller-tls-certs\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.440174 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62617783-e02a-4d59-b7a1-36206106585b-scripts\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.450373 4748 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-combined-ca-bundle\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.451182 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8llxv\" (UniqueName: \"kubernetes.io/projected/5282d6ba-c0a4-4ada-9ffb-d233444b10f1-kube-api-access-8llxv\") pod \"ovn-controller-8q6bn\" (UID: \"5282d6ba-c0a4-4ada-9ffb-d233444b10f1\") " pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.451579 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgw8c\" (UniqueName: \"kubernetes.io/projected/62617783-e02a-4d59-b7a1-36206106585b-kube-api-access-kgw8c\") pod \"ovn-controller-ovs-rhspv\" (UID: \"62617783-e02a-4d59-b7a1-36206106585b\") " pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.577579 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8q6bn" Feb 16 15:11:40 crc kubenswrapper[4748]: I0216 15:11:40.592197 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.633327 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.635104 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.637515 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.637615 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-f5zq4" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.638159 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.640512 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.656675 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732341 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e891de95-67f1-4cdd-8913-747978f44a1e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732408 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p59xq\" (UniqueName: \"kubernetes.io/projected/e891de95-67f1-4cdd-8913-747978f44a1e-kube-api-access-p59xq\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732441 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e891de95-67f1-4cdd-8913-747978f44a1e-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732539 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732571 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732737 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e891de95-67f1-4cdd-8913-747978f44a1e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732855 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.732968 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: 
\"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.835113 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e891de95-67f1-4cdd-8913-747978f44a1e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.835200 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.835250 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.835292 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e891de95-67f1-4cdd-8913-747978f44a1e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.835356 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p59xq\" (UniqueName: \"kubernetes.io/projected/e891de95-67f1-4cdd-8913-747978f44a1e-kube-api-access-p59xq\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: 
I0216 15:11:44.835385 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e891de95-67f1-4cdd-8913-747978f44a1e-config\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.835425 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.835450 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.836091 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e891de95-67f1-4cdd-8913-747978f44a1e-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.837269 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e891de95-67f1-4cdd-8913-747978f44a1e-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.837545 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e891de95-67f1-4cdd-8913-747978f44a1e-config\") pod 
\"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.838488 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.838688 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9985cc30d1013bfa980f4a7ce9e8ce5d42d0a81f2dbed1309202c4c71bb1f036/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.841848 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.842343 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.854165 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e891de95-67f1-4cdd-8913-747978f44a1e-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 
15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.857223 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p59xq\" (UniqueName: \"kubernetes.io/projected/e891de95-67f1-4cdd-8913-747978f44a1e-kube-api-access-p59xq\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.888824 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-da2454b6-447b-43ea-8904-c6b07d59d4aa\") pod \"ovsdbserver-sb-0\" (UID: \"e891de95-67f1-4cdd-8913-747978f44a1e\") " pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:44 crc kubenswrapper[4748]: I0216 15:11:44.959415 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.663524 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b"] Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.666118 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.670285 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-b8kl9" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.670501 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.670653 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.670977 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.688421 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.698774 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b"] Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.792044 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.792115 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4gr\" (UniqueName: \"kubernetes.io/projected/c624e6e8-c1e8-433c-ad0f-603109d8fa32-kube-api-access-bw4gr\") pod 
\"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.792165 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.792246 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.792338 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c624e6e8-c1e8-433c-ad0f-603109d8fa32-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.847660 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s"] Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.861581 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.864022 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.864383 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.864499 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.883265 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s"] Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.894949 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c624e6e8-c1e8-433c-ad0f-603109d8fa32-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895013 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895044 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw4gr\" (UniqueName: \"kubernetes.io/projected/c624e6e8-c1e8-433c-ad0f-603109d8fa32-kube-api-access-bw4gr\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" 
(UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895079 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895103 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895123 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v575v\" (UniqueName: \"kubernetes.io/projected/8748ce40-6f4e-417f-919b-5ce0b40ebf43-kube-api-access-v575v\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895164 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc 
kubenswrapper[4748]: I0216 15:11:47.895187 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895206 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8748ce40-6f4e-417f-919b-5ce0b40ebf43-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895228 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.895253 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.896191 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c624e6e8-c1e8-433c-ad0f-603109d8fa32-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.901179 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.902099 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.905338 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/c624e6e8-c1e8-433c-ad0f-603109d8fa32-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.954095 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr"] Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.955295 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.968464 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.968657 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.989534 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw4gr\" (UniqueName: \"kubernetes.io/projected/c624e6e8-c1e8-433c-ad0f-603109d8fa32-kube-api-access-bw4gr\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-vwd9b\" (UID: \"c624e6e8-c1e8-433c-ad0f-603109d8fa32\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.996882 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.999797 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v575v\" (UniqueName: \"kubernetes.io/projected/8748ce40-6f4e-417f-919b-5ce0b40ebf43-kube-api-access-v575v\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.999861 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gbx7\" (UniqueName: 
\"kubernetes.io/projected/fcc01f6d-7536-43f6-bd86-a6eea7443783-kube-api-access-7gbx7\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:47 crc kubenswrapper[4748]: I0216 15:11:47.999927 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc01f6d-7536-43f6-bd86-a6eea7443783-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.000071 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.000110 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8748ce40-6f4e-417f-919b-5ce0b40ebf43-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.000180 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.000245 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.000508 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.000731 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.001022 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " 
pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:47.998847 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.001563 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.002613 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr"] Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.003097 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8748ce40-6f4e-417f-919b-5ce0b40ebf43-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.004774 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.009926 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.015330 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/8748ce40-6f4e-417f-919b-5ce0b40ebf43-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.042702 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v575v\" (UniqueName: \"kubernetes.io/projected/8748ce40-6f4e-417f-919b-5ce0b40ebf43-kube-api-access-v575v\") pod \"cloudkitty-lokistack-querier-58c84b5844-7lv6s\" (UID: \"8748ce40-6f4e-417f-919b-5ce0b40ebf43\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.107626 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.107695 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: 
\"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.107778 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.107827 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gbx7\" (UniqueName: \"kubernetes.io/projected/fcc01f6d-7536-43f6-bd86-a6eea7443783-kube-api-access-7gbx7\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.107855 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc01f6d-7536-43f6-bd86-a6eea7443783-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.109114 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcc01f6d-7536-43f6-bd86-a6eea7443783-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.111445 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.117257 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.117427 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/fcc01f6d-7536-43f6-bd86-a6eea7443783-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.131197 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m"] Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.133393 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.137909 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.138214 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.138409 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.138411 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.138236 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.138797 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.157904 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gbx7\" (UniqueName: \"kubernetes.io/projected/fcc01f6d-7536-43f6-bd86-a6eea7443783-kube-api-access-7gbx7\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr\" (UID: \"fcc01f6d-7536-43f6-bd86-a6eea7443783\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.162868 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m"] Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.174393 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8"] Feb 16 15:11:48 crc kubenswrapper[4748]: 
I0216 15:11:48.176224 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.183776 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8"] Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.186191 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-6dvjf" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.195284 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211184 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211231 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211293 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") 
" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211416 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211482 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211516 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211545 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7spfg\" (UniqueName: \"kubernetes.io/projected/48850a43-a766-44bc-9426-b56c91be16d1-kube-api-access-7spfg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211577 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.211669 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.313739 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.313816 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.313875 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.313905 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.313943 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.313969 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314015 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc 
kubenswrapper[4748]: I0216 15:11:48.314043 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314083 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314114 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314265 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314420 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5spkn\" (UniqueName: \"kubernetes.io/projected/eded30d6-cdfa-48c2-b298-28242bb952d1-kube-api-access-5spkn\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314471 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314509 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7spfg\" (UniqueName: \"kubernetes.io/projected/48850a43-a766-44bc-9426-b56c91be16d1-kube-api-access-7spfg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314561 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314592 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.314853 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.315149 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.315320 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.315596 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.316155 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: 
\"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.316408 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.317397 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.319012 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.319222 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.319580 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/48850a43-a766-44bc-9426-b56c91be16d1-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.336405 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7spfg\" (UniqueName: \"kubernetes.io/projected/48850a43-a766-44bc-9426-b56c91be16d1-kube-api-access-7spfg\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-vxq6m\" (UID: \"48850a43-a766-44bc-9426-b56c91be16d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.370075 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420139 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5spkn\" (UniqueName: \"kubernetes.io/projected/eded30d6-cdfa-48c2-b298-28242bb952d1-kube-api-access-5spkn\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420218 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420240 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420326 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420356 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: 
\"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420395 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420433 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420452 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.420482 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.423212 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.423463 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.423646 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.423640 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.423799 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.425458 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.426446 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.428180 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/eded30d6-cdfa-48c2-b298-28242bb952d1-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.440892 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5spkn\" (UniqueName: \"kubernetes.io/projected/eded30d6-cdfa-48c2-b298-28242bb952d1-kube-api-access-5spkn\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-wj7t8\" (UID: \"eded30d6-cdfa-48c2-b298-28242bb952d1\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: 
I0216 15:11:48.464358 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.509995 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.814258 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.815556 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.819585 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.827914 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.831117 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.932133 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.933761 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938593 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938657 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938739 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938774 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938793 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqwhk\" (UniqueName: 
\"kubernetes.io/projected/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-kube-api-access-gqwhk\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938822 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938854 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.938874 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.943237 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.943517 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 16 15:11:48 crc kubenswrapper[4748]: I0216 15:11:48.949289 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 
15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042228 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042294 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042337 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042366 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042396 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqwhk\" (UniqueName: \"kubernetes.io/projected/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-kube-api-access-gqwhk\") pod 
\"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042448 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042503 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042529 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042568 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042592 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8jbt\" (UniqueName: \"kubernetes.io/projected/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-kube-api-access-d8jbt\") pod 
\"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042705 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042755 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042799 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042835 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.042866 4748 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.043628 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.044147 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.044341 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.044980 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.047851 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.048588 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.052313 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.058562 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.060659 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.063547 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.064935 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.067843 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.069796 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqwhk\" (UniqueName: \"kubernetes.io/projected/ecd4cdcd-6dc0-4bba-980e-019d6eae5251-kube-api-access-gqwhk\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.085082 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.118708 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"ecd4cdcd-6dc0-4bba-980e-019d6eae5251\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.144829 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.144917 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.144976 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145022 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdwqc\" (UniqueName: \"kubernetes.io/projected/54cc1946-a258-47ba-9460-d27cae5b2b9f-kube-api-access-qdwqc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145062 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 
15:11:49.145080 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8jbt\" (UniqueName: \"kubernetes.io/projected/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-kube-api-access-d8jbt\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145113 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145131 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145171 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145194 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cc1946-a258-47ba-9460-d27cae5b2b9f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " 
pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145217 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145243 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145270 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.145304 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.146447 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.150024 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.150922 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.151840 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.152865 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.156051 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " 
pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.157778 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.173645 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8jbt\" (UniqueName: \"kubernetes.io/projected/42d40dab-aebb-44fa-ac5a-9100d1b1fb48-kube-api-access-d8jbt\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.174406 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"42d40dab-aebb-44fa-ac5a-9100d1b1fb48\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.246927 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.247013 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdwqc\" (UniqueName: \"kubernetes.io/projected/54cc1946-a258-47ba-9460-d27cae5b2b9f-kube-api-access-qdwqc\") pod 
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.247107 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.247137 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.247224 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cc1946-a258-47ba-9460-d27cae5b2b9f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.247274 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.247363 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: 
\"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.247677 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.249419 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.249619 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54cc1946-a258-47ba-9460-d27cae5b2b9f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.255667 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.255794 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.255935 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/54cc1946-a258-47ba-9460-d27cae5b2b9f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.266335 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdwqc\" (UniqueName: \"kubernetes.io/projected/54cc1946-a258-47ba-9460-d27cae5b2b9f-kube-api-access-qdwqc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.269206 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.275145 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"54cc1946-a258-47ba-9460-d27cae5b2b9f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.365369 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.365771 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9wbln,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-cqnkj_openstack(bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.367445 4748 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj" podUID="bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.378365 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.378664 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-trvj2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-2zvv2_openstack(a288572d-d385-4a03-88d6-d0e4e120f062): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.385194 4748 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.413133 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.413394 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xvt7g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-r2qr5_openstack(ebe04a4d-fd1f-469e-ac5b-2fa626aeb504): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.414003 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.415002 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5" podUID="ebe04a4d-fd1f-469e-ac5b-2fa626aeb504" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.417175 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mtzq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Life
cycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-tzdz5_openstack(bcca3048-157c-4d9d-9958-851e67a08b81): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.418561 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.488667 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.859699 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" Feb 16 15:11:49 crc kubenswrapper[4748]: E0216 15:11:49.861386 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" Feb 16 15:11:49 crc kubenswrapper[4748]: I0216 15:11:49.935572 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8q6bn"] Feb 16 15:11:50 crc kubenswrapper[4748]: I0216 15:11:50.732819 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 16 15:11:50 crc kubenswrapper[4748]: W0216 15:11:50.736847 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb081f805_b462_406b_9d37_5aef68dd9edc.slice/crio-8f7e6b36d8145922b5392b910eb3428bbfa376e53f0abdf46b17bc629b8db657 WatchSource:0}: Error finding container 8f7e6b36d8145922b5392b910eb3428bbfa376e53f0abdf46b17bc629b8db657: Status 404 returned error can't find the container with id 8f7e6b36d8145922b5392b910eb3428bbfa376e53f0abdf46b17bc629b8db657 Feb 16 15:11:50 crc kubenswrapper[4748]: I0216 15:11:50.783132 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 16 15:11:50 crc kubenswrapper[4748]: I0216 15:11:50.807872 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/rabbitmq-server-0"] Feb 16 15:11:50 crc kubenswrapper[4748]: I0216 15:11:50.864835 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b081f805-b462-406b-9d37-5aef68dd9edc","Type":"ContainerStarted","Data":"8f7e6b36d8145922b5392b910eb3428bbfa376e53f0abdf46b17bc629b8db657"} Feb 16 15:11:50 crc kubenswrapper[4748]: I0216 15:11:50.866839 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8q6bn" event={"ID":"5282d6ba-c0a4-4ada-9ffb-d233444b10f1","Type":"ContainerStarted","Data":"06baf04486b3e5d349cfacfb15855db3b3bfdf5a999808d40edb8b2a4648e381"} Feb 16 15:11:50 crc kubenswrapper[4748]: I0216 15:11:50.875171 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da81a287-d981-4b30-8d23-70cbc085368e","Type":"ContainerStarted","Data":"4b0f4dfb8ab0c72a8b6beb56c24a475766d2dcaea813a013b662b9d053db603a"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.033938 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.040768 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.141368 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wbln\" (UniqueName: \"kubernetes.io/projected/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-kube-api-access-9wbln\") pod \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.147445 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-dns-svc\") pod \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.147552 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-config\") pod \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.147589 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-config\") pod \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\" (UID: \"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6\") " Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.147629 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvt7g\" (UniqueName: \"kubernetes.io/projected/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-kube-api-access-xvt7g\") pod \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\" (UID: \"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504\") " Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.148089 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6" (UID: "bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.148619 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-config" (OuterVolumeSpecName: "config") pod "bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6" (UID: "bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.149476 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.149498 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.149709 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8"] Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.151815 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-config" (OuterVolumeSpecName: "config") pod "ebe04a4d-fd1f-469e-ac5b-2fa626aeb504" (UID: "ebe04a4d-fd1f-469e-ac5b-2fa626aeb504"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.154626 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-kube-api-access-xvt7g" (OuterVolumeSpecName: "kube-api-access-xvt7g") pod "ebe04a4d-fd1f-469e-ac5b-2fa626aeb504" (UID: "ebe04a4d-fd1f-469e-ac5b-2fa626aeb504"). InnerVolumeSpecName "kube-api-access-xvt7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.154851 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-kube-api-access-9wbln" (OuterVolumeSpecName: "kube-api-access-9wbln") pod "bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6" (UID: "bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6"). InnerVolumeSpecName "kube-api-access-9wbln". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.161095 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: W0216 15:11:51.163859 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeded30d6_cdfa_48c2_b298_28242bb952d1.slice/crio-abfe24f1cdeafc68ef63a995d24e0c5410bc9929b534209eee01bdc13f13f538 WatchSource:0}: Error finding container abfe24f1cdeafc68ef63a995d24e0c5410bc9929b534209eee01bdc13f13f538: Status 404 returned error can't find the container with id abfe24f1cdeafc68ef63a995d24e0c5410bc9929b534209eee01bdc13f13f538 Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.168860 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.186603 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 
15:11:51 crc kubenswrapper[4748]: W0216 15:11:51.192452 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc06fd3c2_2bb4_40e8_8911_4f30daf28f43.slice/crio-b14eddf0401524cb6eb65fcd35e485e527348f9d569e3a08c180b44a8b8b17e3 WatchSource:0}: Error finding container b14eddf0401524cb6eb65fcd35e485e527348f9d569e3a08c180b44a8b8b17e3: Status 404 returned error can't find the container with id b14eddf0401524cb6eb65fcd35e485e527348f9d569e3a08c180b44a8b8b17e3 Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.250212 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.250242 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvt7g\" (UniqueName: \"kubernetes.io/projected/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504-kube-api-access-xvt7g\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.250255 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wbln\" (UniqueName: \"kubernetes.io/projected/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6-kube-api-access-9wbln\") on node \"crc\" DevicePath \"\"" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.581336 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.625931 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: W0216 15:11:51.632433 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec8e9d13_fbe3_43e2_80bb_e2056ce6c92c.slice/crio-fae37f35eba30c662bec2e7373eb0c10592e44aded1365a519aa35f7a54548be 
WatchSource:0}: Error finding container fae37f35eba30c662bec2e7373eb0c10592e44aded1365a519aa35f7a54548be: Status 404 returned error can't find the container with id fae37f35eba30c662bec2e7373eb0c10592e44aded1365a519aa35f7a54548be Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.658808 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: W0216 15:11:51.669457 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8748ce40_6f4e_417f_919b_5ce0b40ebf43.slice/crio-5717e8f71c8d5779eb8625456a6c3ce4c7e2df06abc65fee3d92e442d6a53341 WatchSource:0}: Error finding container 5717e8f71c8d5779eb8625456a6c3ce4c7e2df06abc65fee3d92e442d6a53341: Status 404 returned error can't find the container with id 5717e8f71c8d5779eb8625456a6c3ce4c7e2df06abc65fee3d92e442d6a53341 Feb 16 15:11:51 crc kubenswrapper[4748]: W0216 15:11:51.673205 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc624e6e8_c1e8_433c_ad0f_603109d8fa32.slice/crio-1f0eeb0bb0f0b76db73fb91498bf6707d87737679f35add3c9806a4ffbc22b98 WatchSource:0}: Error finding container 1f0eeb0bb0f0b76db73fb91498bf6707d87737679f35add3c9806a4ffbc22b98: Status 404 returned error can't find the container with id 1f0eeb0bb0f0b76db73fb91498bf6707d87737679f35add3c9806a4ffbc22b98 Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.677453 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s"] Feb 16 15:11:51 crc kubenswrapper[4748]: W0216 15:11:51.678282 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54cc1946_a258_47ba_9460_d27cae5b2b9f.slice/crio-07dcdd01011eaa47f1748811883f73f96f9d3c79b987e8ee3800c28c428bbe2a WatchSource:0}: Error 
finding container 07dcdd01011eaa47f1748811883f73f96f9d3c79b987e8ee3800c28c428bbe2a: Status 404 returned error can't find the container with id 07dcdd01011eaa47f1748811883f73f96f9d3c79b987e8ee3800c28c428bbe2a Feb 16 15:11:51 crc kubenswrapper[4748]: W0216 15:11:51.686003 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42d40dab_aebb_44fa_ac5a_9100d1b1fb48.slice/crio-46fa9b1c59733c3e07c9a0784b09f6a77ccb4523db46af90281cec37854ba9ba WatchSource:0}: Error finding container 46fa9b1c59733c3e07c9a0784b09f6a77ccb4523db46af90281cec37854ba9ba: Status 404 returned error can't find the container with id 46fa9b1c59733c3e07c9a0784b09f6a77ccb4523db46af90281cec37854ba9ba Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.687984 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m"] Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.692961 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-compactor,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=compactor -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-compactor-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8jbt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:n
il,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-compactor-0_openstack(42d40dab-aebb-44fa-ac5a-9100d1b1fb48): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.694112 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="42d40dab-aebb-44fa-ac5a-9100d1b1fb48" Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.698296 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5hd5h67hdchbbhfdhfbh574hb6hbch59bh5c6h597h65fh55h68bh55bh65bh55h584h68bh9bhddh66h75h5c7h9dh554h589h99h58dhc5q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7pd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Comm
and:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(252aec5a-72dd-4699-b9b8-72dc1c8bd1a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.700221 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n5hd5h67hdchbbhfdhfbh574hb6hbch59bh5c6h597h65fh55h68bh55bh65bh55h584h68bh9bhddh66h75h5c7h9dh554h589h99h58dhc5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s7pd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},
Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(252aec5a-72dd-4699-b9b8-72dc1c8bd1a8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.702175 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-nb-0" podUID="252aec5a-72dd-4699-b9b8-72dc1c8bd1a8" Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.703067 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:loki-ingester,Image:registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981,Command:[],Args:[-target=ingester -config.file=/etc/loki/config/config.yaml -runtime-config.file=/etc/loki/config/runtime-config.yaml 
-config.expand-env=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:3100,Protocol:TCP,HostIP:,},ContainerPort{Name:grpclb,HostPort:0,ContainerPort:9095,Protocol:TCP,HostIP:,},ContainerPort{Name:gossip-ring,HostPort:0,ContainerPort:7946,Protocol:TCP,HostIP:,},ContainerPort{Name:healthchecks,HostPort:0,ContainerPort:3101,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:AWS_ACCESS_KEY_ID,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_id,Optional:nil,},},},EnvVar{Name:AWS_ACCESS_KEY_SECRET,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:cloudkitty-loki-s3,},Key:access_key_secret,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/loki/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:storage,ReadOnly:false,MountPath:/tmp/loki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:wal,ReadOnly:false,MountPath:/tmp/wal,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-http,ReadOnly:false,MountPath:/var/run/tls/http/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-loki-s3,ReadOnly:false,MountPath:/etc/storage/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ingester-grpc,ReadOnly:false,MountPath:/var/run/tls/grpc/server,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cloudkitty-lokistack-ca-bundle,ReadOnly:false,MountPath:/var/run/ca
,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gqwhk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/loki/api/v1/status/buildinfo,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:2,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 3101 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-lokistack-ingester-0_openstack(ecd4cdcd-6dc0-4bba-980e-019d6eae5251): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.703566 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b"] Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.704380 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" 
with ErrImagePull: \"pull QPS exceeded\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ecd4cdcd-6dc0-4bba-980e-019d6eae5251" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.721294 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr"] Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.731242 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59fh77h4h667h98h77hd9hb9h577h647h568hd8h566h649h58dh59dh57bh59bhc8h5d4h8bhc5h5fch567h69h57dh5f4h5b5h5ffh59dh65fh57fq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagat
ion:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-sb-0_openstack(e891de95-67f1-4cdd-8913-747978f44a1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.734520 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n59fh77h4h667h98h77hd9hb9h577h647h568hd8h566h649h58dh59dh57bh59bhc8h5d4h8bhc5h5fch567h69h57dh5f4h5b5h5ffh59dh65fh57fq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p59xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabiliti
es:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(e891de95-67f1-4cdd-8913-747978f44a1e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.735402 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.735755 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-sb-0" podUID="e891de95-67f1-4cdd-8913-747978f44a1e" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.747608 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.762582 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.778519 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.891769 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" event={"ID":"fcc01f6d-7536-43f6-bd86-a6eea7443783","Type":"ContainerStarted","Data":"0d788e3edf10ee8146ecbb20f45acb0cc362ea9a3e0c70c2e4935d833439dad2"} Feb 16 
15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.893940 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e891de95-67f1-4cdd-8913-747978f44a1e","Type":"ContainerStarted","Data":"6df7bfec92fbe7790b53822c86e76cb8925acdf3aee29f1edd6286c8b4885fb3"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.897330 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"712fd752-5464-47c2-851e-b5b54a2cf335","Type":"ContainerStarted","Data":"6707a1ffc5e2760b8888fb2a6c2f0e0d140a32389dc96324f88fe696089a152b"} Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.900567 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="e891de95-67f1-4cdd-8913-747978f44a1e" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.902840 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c06fd3c2-2bb4-40e8-8911-4f30daf28f43","Type":"ContainerStarted","Data":"b14eddf0401524cb6eb65fcd35e485e527348f9d569e3a08c180b44a8b8b17e3"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.905005 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1","Type":"ContainerStarted","Data":"b2d34e5d9b9bd5f85e221b423c1da2c74d5aa1d5ca186725f9e0c739d08dd07a"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.906340 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj" 
event={"ID":"bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6","Type":"ContainerDied","Data":"f83aac058e625dbb0aedf792a2db17c3ff356ae4df0b4e6594aa7c2a89fb4e71"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.906362 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-cqnkj" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.908479 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"42d40dab-aebb-44fa-ac5a-9100d1b1fb48","Type":"ContainerStarted","Data":"46fa9b1c59733c3e07c9a0784b09f6a77ccb4523db46af90281cec37854ba9ba"} Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.910352 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="42d40dab-aebb-44fa-ac5a-9100d1b1fb48" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.912746 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerStarted","Data":"49e08f28ba315f5d6092adbc8dabda4c13901de2cff76c0f3980c7a8c87d8842"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.920806 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" event={"ID":"c624e6e8-c1e8-433c-ad0f-603109d8fa32","Type":"ContainerStarted","Data":"1f0eeb0bb0f0b76db73fb91498bf6707d87737679f35add3c9806a4ffbc22b98"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.923005 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5" 
event={"ID":"ebe04a4d-fd1f-469e-ac5b-2fa626aeb504","Type":"ContainerDied","Data":"31f3696133127ed2edb4a03a56f3d27c45c1a3e69ff846a5f6a5258f2bfff96f"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.923086 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-r2qr5" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.928086 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c","Type":"ContainerStarted","Data":"fae37f35eba30c662bec2e7373eb0c10592e44aded1365a519aa35f7a54548be"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.930003 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" event={"ID":"8748ce40-6f4e-417f-919b-5ce0b40ebf43","Type":"ContainerStarted","Data":"5717e8f71c8d5779eb8625456a6c3ce4c7e2df06abc65fee3d92e442d6a53341"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.933183 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" event={"ID":"eded30d6-cdfa-48c2-b298-28242bb952d1","Type":"ContainerStarted","Data":"abfe24f1cdeafc68ef63a995d24e0c5410bc9929b534209eee01bdc13f13f538"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.934882 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"23bef88b-c878-46e0-960b-f77594421c27","Type":"ContainerStarted","Data":"66607fd982173994a3429a7bf82a823a943ec185361517291f50b95e7f910b59"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.936754 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" event={"ID":"48850a43-a766-44bc-9426-b56c91be16d1","Type":"ContainerStarted","Data":"20caa1ada842a03a080e4f7e50330be8f5aed7c28ed8d2fdfe22152d2527d45b"} Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 
15:11:51.940701 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"ecd4cdcd-6dc0-4bba-980e-019d6eae5251","Type":"ContainerStarted","Data":"b549305ec6748d14dff99d6ede93c8dfc4e32b8f53696bc0707e64949af55fee"} Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.944634 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ecd4cdcd-6dc0-4bba-980e-019d6eae5251" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.956086 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8","Type":"ContainerStarted","Data":"93793cc1a1c5d128774ea482f1560422be42ee9eea11714db1bce7daa717be10"} Feb 16 15:11:51 crc kubenswrapper[4748]: E0216 15:11:51.959816 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="252aec5a-72dd-4699-b9b8-72dc1c8bd1a8" Feb 16 15:11:51 crc kubenswrapper[4748]: I0216 15:11:51.966240 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"54cc1946-a258-47ba-9460-d27cae5b2b9f","Type":"ContainerStarted","Data":"07dcdd01011eaa47f1748811883f73f96f9d3c79b987e8ee3800c28c428bbe2a"} Feb 16 15:11:52 crc kubenswrapper[4748]: I0216 
15:11:52.109919 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cqnkj"] Feb 16 15:11:52 crc kubenswrapper[4748]: I0216 15:11:52.127132 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-cqnkj"] Feb 16 15:11:52 crc kubenswrapper[4748]: I0216 15:11:52.143079 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r2qr5"] Feb 16 15:11:52 crc kubenswrapper[4748]: E0216 15:11:52.148241 4748 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebe04a4d_fd1f_469e_ac5b_2fa626aeb504.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebe04a4d_fd1f_469e_ac5b_2fa626aeb504.slice/crio-31f3696133127ed2edb4a03a56f3d27c45c1a3e69ff846a5f6a5258f2bfff96f\": RecentStats: unable to find data in memory cache]" Feb 16 15:11:52 crc kubenswrapper[4748]: I0216 15:11:52.158218 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-r2qr5"] Feb 16 15:11:52 crc kubenswrapper[4748]: I0216 15:11:52.424090 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rhspv"] Feb 16 15:11:52 crc kubenswrapper[4748]: E0216 15:11:52.980153 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-ingester\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ecd4cdcd-6dc0-4bba-980e-019d6eae5251" Feb 16 15:11:52 crc kubenswrapper[4748]: E0216 15:11:52.980329 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"loki-compactor\" with ImagePullBackOff: 
\"Back-off pulling image \\\"registry.redhat.io/openshift-logging/logging-loki-rhel9@sha256:2988df223331c4653649c064d533a3f2b23aa5b11711ea8aede7338146b69981\\\"\"" pod="openstack/cloudkitty-lokistack-compactor-0" podUID="42d40dab-aebb-44fa-ac5a-9100d1b1fb48" Feb 16 15:11:52 crc kubenswrapper[4748]: E0216 15:11:52.980667 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-sb-0" podUID="e891de95-67f1-4cdd-8913-747978f44a1e" Feb 16 15:11:52 crc kubenswrapper[4748]: E0216 15:11:52.981891 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="252aec5a-72dd-4699-b9b8-72dc1c8bd1a8" Feb 16 15:11:53 crc kubenswrapper[4748]: I0216 15:11:53.034886 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6" path="/var/lib/kubelet/pods/bb3dc102-6f1a-4c76-bbde-de2e2da2c9d6/volumes" Feb 16 15:11:53 crc kubenswrapper[4748]: I0216 15:11:53.035330 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebe04a4d-fd1f-469e-ac5b-2fa626aeb504" path="/var/lib/kubelet/pods/ebe04a4d-fd1f-469e-ac5b-2fa626aeb504/volumes" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 
15:12:01.531865 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwfx"] Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.535481 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.551622 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwfx"] Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.582093 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q44j4\" (UniqueName: \"kubernetes.io/projected/062968b6-bfb6-4065-84b8-63521623319e-kube-api-access-q44j4\") pod \"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.582373 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-catalog-content\") pod \"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.582453 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-utilities\") pod \"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.683612 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-catalog-content\") pod 
\"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.683683 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-utilities\") pod \"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.683766 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q44j4\" (UniqueName: \"kubernetes.io/projected/062968b6-bfb6-4065-84b8-63521623319e-kube-api-access-q44j4\") pod \"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.684429 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-catalog-content\") pod \"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.684487 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-utilities\") pod \"redhat-marketplace-lbwfx\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.705233 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q44j4\" (UniqueName: \"kubernetes.io/projected/062968b6-bfb6-4065-84b8-63521623319e-kube-api-access-q44j4\") pod \"redhat-marketplace-lbwfx\" (UID: 
\"062968b6-bfb6-4065-84b8-63521623319e\") " pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:01 crc kubenswrapper[4748]: I0216 15:12:01.871129 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:02 crc kubenswrapper[4748]: I0216 15:12:02.072360 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rhspv" event={"ID":"62617783-e02a-4d59-b7a1-36206106585b","Type":"ContainerStarted","Data":"08c57b845f5742e7b4a272f5d7dc5b93dfe951ec9e6596e65b5ce2f338ac973a"} Feb 16 15:12:03 crc kubenswrapper[4748]: E0216 15:12:03.755094 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 16 15:12:03 crc kubenswrapper[4748]: E0216 15:12:03.755594 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qfcgm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(b081f805-b462-406b-9d37-5aef68dd9edc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:03 crc 
kubenswrapper[4748]: E0216 15:12:03.756802 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="b081f805-b462-406b-9d37-5aef68dd9edc" Feb 16 15:12:04 crc kubenswrapper[4748]: E0216 15:12:04.099841 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="b081f805-b462-406b-9d37-5aef68dd9edc" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.072916 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.073177 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cp2jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-cell1-galera-0_openstack(ccc28d79-7cdc-4fac-95bb-2f041b1f25f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.074599 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="ccc28d79-7cdc-4fac-95bb-2f041b1f25f1" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.093478 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.093718 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ghdv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(712fd752-5464-47c2-851e-b5b54a2cf335): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.094939 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="712fd752-5464-47c2-851e-b5b54a2cf335" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.126502 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="712fd752-5464-47c2-851e-b5b54a2cf335" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.129303 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="ccc28d79-7cdc-4fac-95bb-2f041b1f25f1" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.389024 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.389435 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58bh5cch57fh67ch5d9h5f7h559h5dch69hbfh66fh659h598h65dh8chcdh64bh596h557h7ch5ch76h9fh588h7ch55h546h5f6hdfh66dh55ch65dq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8llxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC
:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-8q6bn_openstack(5282d6ba-c0a4-4ada-9ffb-d233444b10f1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:12:07 crc kubenswrapper[4748]: E0216 15:12:07.390698 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-8q6bn" podUID="5282d6ba-c0a4-4ada-9ffb-d233444b10f1" Feb 16 15:12:08 crc kubenswrapper[4748]: E0216 15:12:08.135166 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-8q6bn" podUID="5282d6ba-c0a4-4ada-9ffb-d233444b10f1" Feb 16 15:12:14 crc kubenswrapper[4748]: E0216 15:12:14.725329 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a" Feb 16 15:12:14 crc kubenswrapper[4748]: E0216 15:12:14.727298 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init-config-reloader,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,Command:[/bin/prometheus-config-reloader],Args:[--watch-interval=0 --listen-address=:8081 --config-file=/etc/prometheus/config/prometheus.yaml.gz --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1 --watched-dir=/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:reloader-init,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:SHARD,Value:0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/etc/prometheus/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:false,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-1,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-2,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lkwcz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(8ce4009b-ef44-4224-a7ab-0d514eccbabb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Feb 16 15:12:14 crc kubenswrapper[4748]: E0216 15:12:14.729146 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" Feb 16 15:12:15 crc kubenswrapper[4748]: E0216 15:12:15.960091 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init-config-reloader\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" Feb 16 15:12:18 crc kubenswrapper[4748]: I0216 15:12:18.832418 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwfx"] Feb 16 15:12:19 crc kubenswrapper[4748]: W0216 15:12:19.604001 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062968b6_bfb6_4065_84b8_63521623319e.slice/crio-ba791ca8271548a5f663e4eb4671eb74fd3c98d8e10bd921fcc4a13e70c34d60 WatchSource:0}: Error finding container ba791ca8271548a5f663e4eb4671eb74fd3c98d8e10bd921fcc4a13e70c34d60: Status 404 returned error can't find the container with id ba791ca8271548a5f663e4eb4671eb74fd3c98d8e10bd921fcc4a13e70c34d60 Feb 16 15:12:20 crc kubenswrapper[4748]: I0216 15:12:20.238077 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwfx" event={"ID":"062968b6-bfb6-4065-84b8-63521623319e","Type":"ContainerStarted","Data":"ba791ca8271548a5f663e4eb4671eb74fd3c98d8e10bd921fcc4a13e70c34d60"} Feb 16 15:12:20 
crc kubenswrapper[4748]: E0216 15:12:20.679009 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:12:20 crc kubenswrapper[4748]: E0216 15:12:20.679706 4748 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Feb 16 15:12:20 crc kubenswrapper[4748]: E0216 15:12:20.679978 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9cn79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(c06fd3c2-2bb4-40e8-8911-4f30daf28f43): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 16 15:12:20 crc kubenswrapper[4748]: E0216 15:12:20.681385 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" Feb 16 15:12:21 crc kubenswrapper[4748]: I0216 15:12:21.250962 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"23bef88b-c878-46e0-960b-f77594421c27","Type":"ContainerStarted","Data":"80326594295fe1aeba1fddff6e181cfa8178a25223322a754190eadcf399e8af"} Feb 16 15:12:21 crc kubenswrapper[4748]: I0216 15:12:21.252441 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 16 15:12:21 crc kubenswrapper[4748]: I0216 15:12:21.255380 
4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" event={"ID":"eded30d6-cdfa-48c2-b298-28242bb952d1","Type":"ContainerStarted","Data":"a27ec96184fa87ecc715548340446caeb2808c73651333e4c4d2455982c3f8db"} Feb 16 15:12:21 crc kubenswrapper[4748]: I0216 15:12:21.255629 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:12:21 crc kubenswrapper[4748]: E0216 15:12:21.259502 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" Feb 16 15:12:21 crc kubenswrapper[4748]: I0216 15:12:21.269188 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=27.252769609 podStartE2EDuration="48.26916958s" podCreationTimestamp="2026-02-16 15:11:33 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.195693932 +0000 UTC m=+1136.887362971" lastFinishedPulling="2026-02-16 15:12:12.212093903 +0000 UTC m=+1157.903762942" observedRunningTime="2026-02-16 15:12:21.268383081 +0000 UTC m=+1166.960052140" watchObservedRunningTime="2026-02-16 15:12:21.26916958 +0000 UTC m=+1166.960838629" Feb 16 15:12:21 crc kubenswrapper[4748]: I0216 15:12:21.285073 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" Feb 16 15:12:21 crc kubenswrapper[4748]: I0216 15:12:21.304304 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-wj7t8" podStartSLOduration=7.282221772 podStartE2EDuration="33.30427804s" podCreationTimestamp="2026-02-16 15:11:48 +0000 UTC" 
firstStartedPulling="2026-02-16 15:11:51.173871517 +0000 UTC m=+1136.865540556" lastFinishedPulling="2026-02-16 15:12:17.195927785 +0000 UTC m=+1162.887596824" observedRunningTime="2026-02-16 15:12:21.296592792 +0000 UTC m=+1166.988261831" watchObservedRunningTime="2026-02-16 15:12:21.30427804 +0000 UTC m=+1166.995947089" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.264049 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" event={"ID":"fcc01f6d-7536-43f6-bd86-a6eea7443783","Type":"ContainerStarted","Data":"b2120868c39d405fb5be57d7ff73a3a748f7a6d5cfa51d6ad3479a0df01e5b27"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.264350 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.265835 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" event={"ID":"c624e6e8-c1e8-433c-ad0f-603109d8fa32","Type":"ContainerStarted","Data":"f1fc2bb4db1247e674d72272dbb75ce8a0db080c201be1128dd6f89443dd26f9"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.265981 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.268149 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" event={"ID":"48850a43-a766-44bc-9426-b56c91be16d1","Type":"ContainerStarted","Data":"bc00f36bab544b5ecd5ccb3583088abaa158b69e9a5a5f81d5f6fc4102e1691b"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.268359 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.269707 4748 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da81a287-d981-4b30-8d23-70cbc085368e","Type":"ContainerStarted","Data":"21b566650946c3d07bae6095b600490cc14ece89ad79a10ca04fd9753a21ae71"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.271182 4748 generic.go:334] "Generic (PLEG): container finished" podID="a288572d-d385-4a03-88d6-d0e4e120f062" containerID="a55f7536c37951098edcfd6a252222440cc60a53ac32a6d8d7f6184cac20c8e1" exitCode=0 Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.271255 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" event={"ID":"a288572d-d385-4a03-88d6-d0e4e120f062","Type":"ContainerDied","Data":"a55f7536c37951098edcfd6a252222440cc60a53ac32a6d8d7f6184cac20c8e1"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.272550 4748 generic.go:334] "Generic (PLEG): container finished" podID="bcca3048-157c-4d9d-9958-851e67a08b81" containerID="e96ed57b3e813aab5b6ddaa938cde1603f5b5de750b54b222cb102e8531e1ee5" exitCode=0 Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.272589 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" event={"ID":"bcca3048-157c-4d9d-9958-851e67a08b81","Type":"ContainerDied","Data":"e96ed57b3e813aab5b6ddaa938cde1603f5b5de750b54b222cb102e8531e1ee5"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.274556 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"ecd4cdcd-6dc0-4bba-980e-019d6eae5251","Type":"ContainerStarted","Data":"68a1da34eae6504dc5bd49b6cc61c99a2f392cc32017ed7243e83208ee9aff40"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.274889 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.280495 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovsdbserver-nb-0" event={"ID":"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8","Type":"ContainerStarted","Data":"c7830e3873c697ad7ae292fc9effec0a5291003f96d8b1808049017f11e250af"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.288971 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" podStartSLOduration=9.748805929 podStartE2EDuration="35.288955482s" podCreationTimestamp="2026-02-16 15:11:47 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.686549176 +0000 UTC m=+1137.378218215" lastFinishedPulling="2026-02-16 15:12:17.226698719 +0000 UTC m=+1162.918367768" observedRunningTime="2026-02-16 15:12:22.282966735 +0000 UTC m=+1167.974635774" watchObservedRunningTime="2026-02-16 15:12:22.288955482 +0000 UTC m=+1167.980624511" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.300269 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e891de95-67f1-4cdd-8913-747978f44a1e","Type":"ContainerStarted","Data":"6c42890569f63b3c43585f8bf7127e6f4258a40ee001a371d150e8b71ed8d827"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.331912 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.340007 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rhspv" event={"ID":"62617783-e02a-4d59-b7a1-36206106585b","Type":"ContainerStarted","Data":"2e5272b02885c41e2a45c47011e7c5b181eff1cefa905107e970af57e28e0e9e"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.367087 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=7.055744806 podStartE2EDuration="35.367069015s" podCreationTimestamp="2026-02-16 15:11:47 +0000 UTC" firstStartedPulling="2026-02-16 
15:11:51.702885607 +0000 UTC m=+1137.394554636" lastFinishedPulling="2026-02-16 15:12:20.014209806 +0000 UTC m=+1165.705878845" observedRunningTime="2026-02-16 15:12:22.366373568 +0000 UTC m=+1168.058042607" watchObservedRunningTime="2026-02-16 15:12:22.367069015 +0000 UTC m=+1168.058738044" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.383337 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"54cc1946-a258-47ba-9460-d27cae5b2b9f","Type":"ContainerStarted","Data":"a697923a2db2948db75b136dee930d9b7c38dc9cd44d427dc5bf86946a58a2ee"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.384275 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.396506 4748 generic.go:334] "Generic (PLEG): container finished" podID="062968b6-bfb6-4065-84b8-63521623319e" containerID="406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9" exitCode=0 Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.396634 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwfx" event={"ID":"062968b6-bfb6-4065-84b8-63521623319e","Type":"ContainerDied","Data":"406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.398942 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" event={"ID":"8748ce40-6f4e-417f-919b-5ce0b40ebf43","Type":"ContainerStarted","Data":"e6bcff8cbb4b861b1088a4672bf0ab9bb7a0d0d89d9dc949e1495d4ae59c6571"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.399037 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.400529 4748 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"42d40dab-aebb-44fa-ac5a-9100d1b1fb48","Type":"ContainerStarted","Data":"1fa15867067128396a83d6fa6a417138058efc2999470ec2bd302cddbcd35664"} Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.449223 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-vxq6m" podStartSLOduration=6.325424453 podStartE2EDuration="34.449191227s" podCreationTimestamp="2026-02-16 15:11:48 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.665708986 +0000 UTC m=+1137.357378025" lastFinishedPulling="2026-02-16 15:12:19.78947576 +0000 UTC m=+1165.481144799" observedRunningTime="2026-02-16 15:12:22.447851044 +0000 UTC m=+1168.139520083" watchObservedRunningTime="2026-02-16 15:12:22.449191227 +0000 UTC m=+1168.140860266" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.473479 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" podStartSLOduration=7.797523107 podStartE2EDuration="35.473457421s" podCreationTimestamp="2026-02-16 15:11:47 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.676934321 +0000 UTC m=+1137.368603360" lastFinishedPulling="2026-02-16 15:12:19.352868635 +0000 UTC m=+1165.044537674" observedRunningTime="2026-02-16 15:12:22.472254692 +0000 UTC m=+1168.163923741" watchObservedRunningTime="2026-02-16 15:12:22.473457421 +0000 UTC m=+1168.165126460" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.549153 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" podStartSLOduration=9.343859639 podStartE2EDuration="35.549117175s" podCreationTimestamp="2026-02-16 15:11:47 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.675346782 +0000 UTC m=+1137.367015821" lastFinishedPulling="2026-02-16 15:12:17.880604318 +0000 UTC m=+1163.572273357" 
observedRunningTime="2026-02-16 15:12:22.518627588 +0000 UTC m=+1168.210296627" watchObservedRunningTime="2026-02-16 15:12:22.549117175 +0000 UTC m=+1168.240786214" Feb 16 15:12:22 crc kubenswrapper[4748]: I0216 15:12:22.652777 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=8.457045732 podStartE2EDuration="34.652749014s" podCreationTimestamp="2026-02-16 15:11:48 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.684887906 +0000 UTC m=+1137.376556945" lastFinishedPulling="2026-02-16 15:12:17.880591188 +0000 UTC m=+1163.572260227" observedRunningTime="2026-02-16 15:12:22.636393693 +0000 UTC m=+1168.328062732" watchObservedRunningTime="2026-02-16 15:12:22.652749014 +0000 UTC m=+1168.344418043" Feb 16 15:12:23 crc kubenswrapper[4748]: I0216 15:12:23.413632 4748 generic.go:334] "Generic (PLEG): container finished" podID="62617783-e02a-4d59-b7a1-36206106585b" containerID="2e5272b02885c41e2a45c47011e7c5b181eff1cefa905107e970af57e28e0e9e" exitCode=0 Feb 16 15:12:23 crc kubenswrapper[4748]: I0216 15:12:23.413764 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rhspv" event={"ID":"62617783-e02a-4d59-b7a1-36206106585b","Type":"ContainerDied","Data":"2e5272b02885c41e2a45c47011e7c5b181eff1cefa905107e970af57e28e0e9e"} Feb 16 15:12:23 crc kubenswrapper[4748]: I0216 15:12:23.416594 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b081f805-b462-406b-9d37-5aef68dd9edc","Type":"ContainerStarted","Data":"76ef5b5c0f1948a2c1a32e73350d994a797b036d9230aae2020e77c7e988c599"} Feb 16 15:12:23 crc kubenswrapper[4748]: I0216 15:12:23.450627 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=8.353999271 podStartE2EDuration="36.45061045s" podCreationTimestamp="2026-02-16 15:11:47 +0000 UTC" 
firstStartedPulling="2026-02-16 15:11:51.692684217 +0000 UTC m=+1137.384353256" lastFinishedPulling="2026-02-16 15:12:19.789295396 +0000 UTC m=+1165.480964435" observedRunningTime="2026-02-16 15:12:22.761385655 +0000 UTC m=+1168.453054684" watchObservedRunningTime="2026-02-16 15:12:23.45061045 +0000 UTC m=+1169.142279489" Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.434113 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" event={"ID":"a288572d-d385-4a03-88d6-d0e4e120f062","Type":"ContainerStarted","Data":"0a7378aa950acfb3671a00efe685e078f33bde04d1a18cc29504d235c3fbb51f"} Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.434816 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.437168 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" event={"ID":"bcca3048-157c-4d9d-9958-851e67a08b81","Type":"ContainerStarted","Data":"45a27db0f6c3e53ec60a90c2a8fca38fb5074588989e980014ca52b1b910397b"} Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.438633 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.448771 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rhspv" event={"ID":"62617783-e02a-4d59-b7a1-36206106585b","Type":"ContainerStarted","Data":"8d8c1c094b8e88c6d63542c3bcf5bfc485d3a2494713f9243e7f1e3d1b4b738a"} Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.450650 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1","Type":"ContainerStarted","Data":"6ef2ee6f5aaa711ece4f69651084dda56f2106d8aceb2686a3d852404459742d"} Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.452438 4748 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8q6bn" event={"ID":"5282d6ba-c0a4-4ada-9ffb-d233444b10f1","Type":"ContainerStarted","Data":"52eacec338a0cf70d4fc39bc30c58c50f150a313deb68f0d4c2141587e69cc47"} Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.452683 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-8q6bn" Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.455268 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c","Type":"ContainerStarted","Data":"2185a4b3e77e6eec56ec61a35fe0895401c7974e33e2d24b699ae6820cddd049"} Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.456604 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"712fd752-5464-47c2-851e-b5b54a2cf335","Type":"ContainerStarted","Data":"751ad63fb3839b5a0259b6d33b61806d1d9a78fa17aff4f149c093a5f3d8475c"} Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.482832 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" podStartSLOduration=5.18654023 podStartE2EDuration="55.482815867s" podCreationTimestamp="2026-02-16 15:11:29 +0000 UTC" firstStartedPulling="2026-02-16 15:11:30.39192443 +0000 UTC m=+1116.083593479" lastFinishedPulling="2026-02-16 15:12:20.688200077 +0000 UTC m=+1166.379869116" observedRunningTime="2026-02-16 15:12:24.467784868 +0000 UTC m=+1170.159453907" watchObservedRunningTime="2026-02-16 15:12:24.482815867 +0000 UTC m=+1170.174484906" Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.559355 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-8q6bn" podStartSLOduration=11.102130829 podStartE2EDuration="44.559328701s" podCreationTimestamp="2026-02-16 15:11:40 +0000 UTC" firstStartedPulling="2026-02-16 15:11:50.1326911 
+0000 UTC m=+1135.824360139" lastFinishedPulling="2026-02-16 15:12:23.589888972 +0000 UTC m=+1169.281558011" observedRunningTime="2026-02-16 15:12:24.556297097 +0000 UTC m=+1170.247966146" watchObservedRunningTime="2026-02-16 15:12:24.559328701 +0000 UTC m=+1170.250997740" Feb 16 15:12:24 crc kubenswrapper[4748]: I0216 15:12:24.584501 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" podStartSLOduration=9.494064493 podStartE2EDuration="55.584480697s" podCreationTimestamp="2026-02-16 15:11:29 +0000 UTC" firstStartedPulling="2026-02-16 15:11:33.698943214 +0000 UTC m=+1119.390612253" lastFinishedPulling="2026-02-16 15:12:19.789359418 +0000 UTC m=+1165.481028457" observedRunningTime="2026-02-16 15:12:24.583071493 +0000 UTC m=+1170.274740532" watchObservedRunningTime="2026-02-16 15:12:24.584480697 +0000 UTC m=+1170.276149736" Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.467889 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwfx" event={"ID":"062968b6-bfb6-4065-84b8-63521623319e","Type":"ContainerStarted","Data":"dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b"} Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.470100 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"e891de95-67f1-4cdd-8913-747978f44a1e","Type":"ContainerStarted","Data":"efe863c787354d9849e3c34b09ddb3f0a51e128d287118b3532a477ba3c2ddff"} Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.472067 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rhspv" event={"ID":"62617783-e02a-4d59-b7a1-36206106585b","Type":"ContainerStarted","Data":"1ef85f4af9af240097c160bbe7a9bb729908fed94f0a0aa19bbef38497c90dc3"} Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.472242 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.472267 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rhspv" Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.473589 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"252aec5a-72dd-4699-b9b8-72dc1c8bd1a8","Type":"ContainerStarted","Data":"6638b39318c5dd3159eab728fcba7dadde72e773f2fd83c603ca81aa0253ffc4"} Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.565773 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=14.231841734 podStartE2EDuration="47.565756346s" podCreationTimestamp="2026-02-16 15:11:38 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.698053818 +0000 UTC m=+1137.389722857" lastFinishedPulling="2026-02-16 15:12:25.03196843 +0000 UTC m=+1170.723637469" observedRunningTime="2026-02-16 15:12:25.557621927 +0000 UTC m=+1171.249290966" watchObservedRunningTime="2026-02-16 15:12:25.565756346 +0000 UTC m=+1171.257425385" Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.641223 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rhspv" podStartSLOduration=31.247182915 podStartE2EDuration="45.641165594s" podCreationTimestamp="2026-02-16 15:11:40 +0000 UTC" firstStartedPulling="2026-02-16 15:12:02.801365042 +0000 UTC m=+1148.493034081" lastFinishedPulling="2026-02-16 15:12:17.195347711 +0000 UTC m=+1162.887016760" observedRunningTime="2026-02-16 15:12:25.613082376 +0000 UTC m=+1171.304751415" watchObservedRunningTime="2026-02-16 15:12:25.641165594 +0000 UTC m=+1171.332834633" Feb 16 15:12:25 crc kubenswrapper[4748]: I0216 15:12:25.717329 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=9.418314583 podStartE2EDuration="42.717306979s" 
podCreationTimestamp="2026-02-16 15:11:43 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.731032486 +0000 UTC m=+1137.422701525" lastFinishedPulling="2026-02-16 15:12:25.030024882 +0000 UTC m=+1170.721693921" observedRunningTime="2026-02-16 15:12:25.688528494 +0000 UTC m=+1171.380197533" watchObservedRunningTime="2026-02-16 15:12:25.717306979 +0000 UTC m=+1171.408976018" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.484157 4748 generic.go:334] "Generic (PLEG): container finished" podID="062968b6-bfb6-4065-84b8-63521623319e" containerID="dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b" exitCode=0 Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.484837 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwfx" event={"ID":"062968b6-bfb6-4065-84b8-63521623319e","Type":"ContainerDied","Data":"dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b"} Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.796969 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6nflw"] Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.798559 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.801751 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.820109 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6nflw"] Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.885240 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-config\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.885334 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.885369 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmvq7\" (UniqueName: \"kubernetes.io/projected/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-kube-api-access-vmvq7\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.885392 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-combined-ca-bundle\") pod \"ovn-controller-metrics-6nflw\" (UID: 
\"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.885476 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-ovs-rundir\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.885516 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-ovn-rundir\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.960480 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.973288 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tzdz5"] Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.973619 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" containerName="dnsmasq-dns" containerID="cri-o://45a27db0f6c3e53ec60a90c2a8fca38fb5074588989e980014ca52b1b910397b" gracePeriod=10 Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.990107 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmvq7\" (UniqueName: \"kubernetes.io/projected/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-kube-api-access-vmvq7\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 
15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.990178 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-combined-ca-bundle\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.990263 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-ovs-rundir\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.990292 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-ovn-rundir\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.990338 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-config\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.990396 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.992299 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-ovs-rundir\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.992321 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-ovn-rundir\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:26 crc kubenswrapper[4748]: I0216 15:12:26.993549 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-config\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.002543 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.021504 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-combined-ca-bundle\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.057960 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmvq7\" (UniqueName: 
\"kubernetes.io/projected/5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2-kube-api-access-vmvq7\") pod \"ovn-controller-metrics-6nflw\" (UID: \"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2\") " pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.059946 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hncsb"] Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.061580 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hncsb"] Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.061663 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.062386 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.066685 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.123557 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-6nflw" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.193402 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2zvv2"] Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.193607 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" containerName="dnsmasq-dns" containerID="cri-o://0a7378aa950acfb3671a00efe685e078f33bde04d1a18cc29504d235c3fbb51f" gracePeriod=10 Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.194822 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdrtl\" (UniqueName: \"kubernetes.io/projected/f3015788-8e77-4cad-9a4f-e7fdf877d085-kube-api-access-cdrtl\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.194874 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-config\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.194941 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.194964 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.215340 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dz5n4"] Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.217634 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.221543 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.239133 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dz5n4"] Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.297179 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvdt4\" (UniqueName: \"kubernetes.io/projected/01a9253f-ccde-45fd-801d-78e2ce029d23-kube-api-access-bvdt4\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.297228 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-config\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.297258 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" 
(UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.298172 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.298259 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.298284 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdrtl\" (UniqueName: \"kubernetes.io/projected/f3015788-8e77-4cad-9a4f-e7fdf877d085-kube-api-access-cdrtl\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.298352 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.298409 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: 
\"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.298449 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-config\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.299731 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-config\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.300367 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.300943 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.328442 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdrtl\" (UniqueName: \"kubernetes.io/projected/f3015788-8e77-4cad-9a4f-e7fdf877d085-kube-api-access-cdrtl\") pod \"dnsmasq-dns-7f896c8c65-hncsb\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") " pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc 
kubenswrapper[4748]: I0216 15:12:27.372252 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.400010 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.400083 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.400117 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.400221 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvdt4\" (UniqueName: \"kubernetes.io/projected/01a9253f-ccde-45fd-801d-78e2ce029d23-kube-api-access-bvdt4\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.400259 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-config\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" 
(UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.401598 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-config\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.401647 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.401676 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.402348 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.421478 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvdt4\" (UniqueName: \"kubernetes.io/projected/01a9253f-ccde-45fd-801d-78e2ce029d23-kube-api-access-bvdt4\") pod \"dnsmasq-dns-86db49b7ff-dz5n4\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 
crc kubenswrapper[4748]: I0216 15:12:27.509917 4748 generic.go:334] "Generic (PLEG): container finished" podID="a288572d-d385-4a03-88d6-d0e4e120f062" containerID="0a7378aa950acfb3671a00efe685e078f33bde04d1a18cc29504d235c3fbb51f" exitCode=0 Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.510379 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" event={"ID":"a288572d-d385-4a03-88d6-d0e4e120f062","Type":"ContainerDied","Data":"0a7378aa950acfb3671a00efe685e078f33bde04d1a18cc29504d235c3fbb51f"} Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.517952 4748 generic.go:334] "Generic (PLEG): container finished" podID="bcca3048-157c-4d9d-9958-851e67a08b81" containerID="45a27db0f6c3e53ec60a90c2a8fca38fb5074588989e980014ca52b1b910397b" exitCode=0 Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.518094 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" event={"ID":"bcca3048-157c-4d9d-9958-851e67a08b81","Type":"ContainerDied","Data":"45a27db0f6c3e53ec60a90c2a8fca38fb5074588989e980014ca52b1b910397b"} Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.527914 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwfx" event={"ID":"062968b6-bfb6-4065-84b8-63521623319e","Type":"ContainerStarted","Data":"affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f"} Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.528275 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.554678 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lbwfx" podStartSLOduration=21.982271676 podStartE2EDuration="26.554657219s" podCreationTimestamp="2026-02-16 15:12:01 +0000 UTC" firstStartedPulling="2026-02-16 15:12:22.493170874 +0000 UTC m=+1168.184839913" 
lastFinishedPulling="2026-02-16 15:12:27.065556427 +0000 UTC m=+1172.757225456" observedRunningTime="2026-02-16 15:12:27.547971285 +0000 UTC m=+1173.239640324" watchObservedRunningTime="2026-02-16 15:12:27.554657219 +0000 UTC m=+1173.246326258" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.583274 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.617623 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.700196 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.719729 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-dns-svc\") pod \"bcca3048-157c-4d9d-9958-851e67a08b81\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.719801 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-config\") pod \"bcca3048-157c-4d9d-9958-851e67a08b81\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.720023 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtzq8\" (UniqueName: \"kubernetes.io/projected/bcca3048-157c-4d9d-9958-851e67a08b81-kube-api-access-mtzq8\") pod \"bcca3048-157c-4d9d-9958-851e67a08b81\" (UID: \"bcca3048-157c-4d9d-9958-851e67a08b81\") " Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.735049 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bcca3048-157c-4d9d-9958-851e67a08b81-kube-api-access-mtzq8" (OuterVolumeSpecName: "kube-api-access-mtzq8") pod "bcca3048-157c-4d9d-9958-851e67a08b81" (UID: "bcca3048-157c-4d9d-9958-851e67a08b81"). InnerVolumeSpecName "kube-api-access-mtzq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.806179 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-config" (OuterVolumeSpecName: "config") pod "bcca3048-157c-4d9d-9958-851e67a08b81" (UID: "bcca3048-157c-4d9d-9958-851e67a08b81"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.812315 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bcca3048-157c-4d9d-9958-851e67a08b81" (UID: "bcca3048-157c-4d9d-9958-851e67a08b81"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.823512 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtzq8\" (UniqueName: \"kubernetes.io/projected/bcca3048-157c-4d9d-9958-851e67a08b81-kube-api-access-mtzq8\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.823576 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.823588 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcca3048-157c-4d9d-9958-851e67a08b81-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.829036 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.925922 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trvj2\" (UniqueName: \"kubernetes.io/projected/a288572d-d385-4a03-88d6-d0e4e120f062-kube-api-access-trvj2\") pod \"a288572d-d385-4a03-88d6-d0e4e120f062\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.926044 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-config\") pod \"a288572d-d385-4a03-88d6-d0e4e120f062\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.926160 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-dns-svc\") pod 
\"a288572d-d385-4a03-88d6-d0e4e120f062\" (UID: \"a288572d-d385-4a03-88d6-d0e4e120f062\") " Feb 16 15:12:27 crc kubenswrapper[4748]: I0216 15:12:27.933639 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a288572d-d385-4a03-88d6-d0e4e120f062-kube-api-access-trvj2" (OuterVolumeSpecName: "kube-api-access-trvj2") pod "a288572d-d385-4a03-88d6-d0e4e120f062" (UID: "a288572d-d385-4a03-88d6-d0e4e120f062"). InnerVolumeSpecName "kube-api-access-trvj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.009756 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6nflw"] Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.022161 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a288572d-d385-4a03-88d6-d0e4e120f062" (UID: "a288572d-d385-4a03-88d6-d0e4e120f062"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.059366 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trvj2\" (UniqueName: \"kubernetes.io/projected/a288572d-d385-4a03-88d6-d0e4e120f062-kube-api-access-trvj2\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.059673 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.119524 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-config" (OuterVolumeSpecName: "config") pod "a288572d-d385-4a03-88d6-d0e4e120f062" (UID: "a288572d-d385-4a03-88d6-d0e4e120f062"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.163862 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a288572d-d385-4a03-88d6-d0e4e120f062-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:28 crc kubenswrapper[4748]: W0216 15:12:28.207406 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3015788_8e77_4cad_9a4f_e7fdf877d085.slice/crio-e116b75f084e3d52b4e7fb8b21a9dec1599b82b60ca2aee3d869f0d857887577 WatchSource:0}: Error finding container e116b75f084e3d52b4e7fb8b21a9dec1599b82b60ca2aee3d869f0d857887577: Status 404 returned error can't find the container with id e116b75f084e3d52b4e7fb8b21a9dec1599b82b60ca2aee3d869f0d857887577 Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.221762 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hncsb"] Feb 16 15:12:28 crc 
kubenswrapper[4748]: I0216 15:12:28.257207 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.348421 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dz5n4"] Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.364004 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 16 15:12:28 crc kubenswrapper[4748]: W0216 15:12:28.401935 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01a9253f_ccde_45fd_801d_78e2ce029d23.slice/crio-71d27d4da6ea4e94c63116e97b25c305933dbfeadd85e481955e5022db0960a3 WatchSource:0}: Error finding container 71d27d4da6ea4e94c63116e97b25c305933dbfeadd85e481955e5022db0960a3: Status 404 returned error can't find the container with id 71d27d4da6ea4e94c63116e97b25c305933dbfeadd85e481955e5022db0960a3 Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.535601 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6nflw" event={"ID":"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2","Type":"ContainerStarted","Data":"62d9e5eb47d6f66bf1fa88b03ee3eb26854a776bcc53e9ec6bbccee1c855a887"} Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.535653 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6nflw" event={"ID":"5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2","Type":"ContainerStarted","Data":"20354eae8648ccbb1465ac531fb67b1b4439633daa3abdd3b69a02cbc6b9efd1"} Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.537190 4748 generic.go:334] "Generic (PLEG): container finished" podID="712fd752-5464-47c2-851e-b5b54a2cf335" containerID="751ad63fb3839b5a0259b6d33b61806d1d9a78fa17aff4f149c093a5f3d8475c" exitCode=0 Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.537241 4748 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"712fd752-5464-47c2-851e-b5b54a2cf335","Type":"ContainerDied","Data":"751ad63fb3839b5a0259b6d33b61806d1d9a78fa17aff4f149c093a5f3d8475c"} Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.541154 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" event={"ID":"a288572d-d385-4a03-88d6-d0e4e120f062","Type":"ContainerDied","Data":"e0b0e717ac89a0a5608a9a7de92be16680126dd2d69f22afea43c3c7e8a9ed35"} Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.541204 4748 scope.go:117] "RemoveContainer" containerID="0a7378aa950acfb3671a00efe685e078f33bde04d1a18cc29504d235c3fbb51f" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.541336 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-2zvv2" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.557752 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6nflw" podStartSLOduration=2.557732182 podStartE2EDuration="2.557732182s" podCreationTimestamp="2026-02-16 15:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:28.556263716 +0000 UTC m=+1174.247932745" watchObservedRunningTime="2026-02-16 15:12:28.557732182 +0000 UTC m=+1174.249401221" Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.564973 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.564973 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-tzdz5" event={"ID":"bcca3048-157c-4d9d-9958-851e67a08b81","Type":"ContainerDied","Data":"797a36112f759ad9d0519051f330c4bb4787a05f500ec02e7118bf726c8e7eee"}
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.575725 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" event={"ID":"01a9253f-ccde-45fd-801d-78e2ce029d23","Type":"ContainerStarted","Data":"71d27d4da6ea4e94c63116e97b25c305933dbfeadd85e481955e5022db0960a3"}
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.594837 4748 generic.go:334] "Generic (PLEG): container finished" podID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerID="89e1d30c886fa4fa38ab1f8757036f9dcd2c8f1019761bf67eed5a086c55a611" exitCode=0
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.596338 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" event={"ID":"f3015788-8e77-4cad-9a4f-e7fdf877d085","Type":"ContainerDied","Data":"89e1d30c886fa4fa38ab1f8757036f9dcd2c8f1019761bf67eed5a086c55a611"}
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.596368 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.596379 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" event={"ID":"f3015788-8e77-4cad-9a4f-e7fdf877d085","Type":"ContainerStarted","Data":"e116b75f084e3d52b4e7fb8b21a9dec1599b82b60ca2aee3d869f0d857887577"}
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.651022 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.663659 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2zvv2"]
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.664933 4748 scope.go:117] "RemoveContainer" containerID="a55f7536c37951098edcfd6a252222440cc60a53ac32a6d8d7f6184cac20c8e1"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.681043 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-2zvv2"]
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.697896 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tzdz5"]
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.713123 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-tzdz5"]
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.740015 4748 scope.go:117] "RemoveContainer" containerID="45a27db0f6c3e53ec60a90c2a8fca38fb5074588989e980014ca52b1b910397b"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.826038 4748 scope.go:117] "RemoveContainer" containerID="e96ed57b3e813aab5b6ddaa938cde1603f5b5de750b54b222cb102e8531e1ee5"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.869383 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 15:12:28 crc kubenswrapper[4748]: E0216 15:12:28.869828 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" containerName="dnsmasq-dns"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.869850 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" containerName="dnsmasq-dns"
Feb 16 15:12:28 crc kubenswrapper[4748]: E0216 15:12:28.869876 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" containerName="dnsmasq-dns"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.869883 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" containerName="dnsmasq-dns"
Feb 16 15:12:28 crc kubenswrapper[4748]: E0216 15:12:28.869900 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" containerName="init"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.869906 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" containerName="init"
Feb 16 15:12:28 crc kubenswrapper[4748]: E0216 15:12:28.869922 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" containerName="init"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.869927 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" containerName="init"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.870081 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" containerName="dnsmasq-dns"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.870093 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" containerName="dnsmasq-dns"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.871162 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.873816 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qdrnr"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.874289 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.875306 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.875623 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.876237 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.880390 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.880425 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.880444 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.880518 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bll7m\" (UniqueName: \"kubernetes.io/projected/62eb48ad-9b6f-4da0-befd-f14a9e32e031-kube-api-access-bll7m\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.880556 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62eb48ad-9b6f-4da0-befd-f14a9e32e031-scripts\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.880571 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62eb48ad-9b6f-4da0-befd-f14a9e32e031-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.880604 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62eb48ad-9b6f-4da0-befd-f14a9e32e031-config\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.981962 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62eb48ad-9b6f-4da0-befd-f14a9e32e031-scripts\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.982008 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62eb48ad-9b6f-4da0-befd-f14a9e32e031-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.982059 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62eb48ad-9b6f-4da0-befd-f14a9e32e031-config\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.982112 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.982133 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.982154 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.982259 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bll7m\" (UniqueName: \"kubernetes.io/projected/62eb48ad-9b6f-4da0-befd-f14a9e32e031-kube-api-access-bll7m\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.984541 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62eb48ad-9b6f-4da0-befd-f14a9e32e031-config\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.985391 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/62eb48ad-9b6f-4da0-befd-f14a9e32e031-scripts\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.985841 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/62eb48ad-9b6f-4da0-befd-f14a9e32e031-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.991609 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.991709 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:28 crc kubenswrapper[4748]: I0216 15:12:28.992021 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/62eb48ad-9b6f-4da0-befd-f14a9e32e031-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.018041 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bll7m\" (UniqueName: \"kubernetes.io/projected/62eb48ad-9b6f-4da0-befd-f14a9e32e031-kube-api-access-bll7m\") pod \"ovn-northd-0\" (UID: \"62eb48ad-9b6f-4da0-befd-f14a9e32e031\") " pod="openstack/ovn-northd-0"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.026542 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a288572d-d385-4a03-88d6-d0e4e120f062" path="/var/lib/kubelet/pods/a288572d-d385-4a03-88d6-d0e4e120f062/volumes"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.028686 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcca3048-157c-4d9d-9958-851e67a08b81" path="/var/lib/kubelet/pods/bcca3048-157c-4d9d-9958-851e67a08b81/volumes"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.113910 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.270429 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.284224 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.604055 4748 generic.go:334] "Generic (PLEG): container finished" podID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerID="b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3" exitCode=0
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.604234 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" event={"ID":"01a9253f-ccde-45fd-801d-78e2ce029d23","Type":"ContainerDied","Data":"b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3"}
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.607949 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" event={"ID":"f3015788-8e77-4cad-9a4f-e7fdf877d085","Type":"ContainerStarted","Data":"363a66b8f128d6cb639dd5bc02b9989a34328c1f65db9c6a2db66ae431a20732"}
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.608098 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.609848 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"712fd752-5464-47c2-851e-b5b54a2cf335","Type":"ContainerStarted","Data":"947cbf80d1be5a85df9d41af854c6774abfdacc586e51c4bff60ee6aa142781d"}
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.613000 4748 generic.go:334] "Generic (PLEG): container finished" podID="ccc28d79-7cdc-4fac-95bb-2f041b1f25f1" containerID="6ef2ee6f5aaa711ece4f69651084dda56f2106d8aceb2686a3d852404459742d" exitCode=0
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.613133 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1","Type":"ContainerDied","Data":"6ef2ee6f5aaa711ece4f69651084dda56f2106d8aceb2686a3d852404459742d"}
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.709588 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=26.05781253 podStartE2EDuration="58.709566269s" podCreationTimestamp="2026-02-16 15:11:31 +0000 UTC" firstStartedPulling="2026-02-16 15:11:50.855086298 +0000 UTC m=+1136.546755337" lastFinishedPulling="2026-02-16 15:12:23.506840047 +0000 UTC m=+1169.198509076" observedRunningTime="2026-02-16 15:12:29.695459583 +0000 UTC m=+1175.387128622" watchObservedRunningTime="2026-02-16 15:12:29.709566269 +0000 UTC m=+1175.401235308"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.735215 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" podStartSLOduration=3.7351501860000003 podStartE2EDuration="3.735150186s" podCreationTimestamp="2026-02-16 15:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:29.730272056 +0000 UTC m=+1175.421941095" watchObservedRunningTime="2026-02-16 15:12:29.735150186 +0000 UTC m=+1175.426819225"
Feb 16 15:12:29 crc kubenswrapper[4748]: I0216 15:12:29.763570 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Feb 16 15:12:29 crc kubenswrapper[4748]: W0216 15:12:29.849456 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62eb48ad_9b6f_4da0_befd_f14a9e32e031.slice/crio-ece9bf2edb49bbcf08c06e14c33740452bd26131301452c5b8705f919755af73 WatchSource:0}: Error finding container ece9bf2edb49bbcf08c06e14c33740452bd26131301452c5b8705f919755af73: Status 404 returned error can't find the container with id ece9bf2edb49bbcf08c06e14c33740452bd26131301452c5b8705f919755af73
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.622499 4748 generic.go:334] "Generic (PLEG): container finished" podID="ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c" containerID="2185a4b3e77e6eec56ec61a35fe0895401c7974e33e2d24b699ae6820cddd049" exitCode=0
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.622636 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c","Type":"ContainerDied","Data":"2185a4b3e77e6eec56ec61a35fe0895401c7974e33e2d24b699ae6820cddd049"}
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.628078 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ccc28d79-7cdc-4fac-95bb-2f041b1f25f1","Type":"ContainerStarted","Data":"8a6cb40149414a3794694f340d877d927866ad871443dc17216cef22f3bfd549"}
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.630398 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" event={"ID":"01a9253f-ccde-45fd-801d-78e2ce029d23","Type":"ContainerStarted","Data":"2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4"}
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.630914 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4"
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.632931 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62eb48ad-9b6f-4da0-befd-f14a9e32e031","Type":"ContainerStarted","Data":"ece9bf2edb49bbcf08c06e14c33740452bd26131301452c5b8705f919755af73"}
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.710761 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" podStartSLOduration=3.7107243260000002 podStartE2EDuration="3.710724326s" podCreationTimestamp="2026-02-16 15:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:30.673644987 +0000 UTC m=+1176.365314026" watchObservedRunningTime="2026-02-16 15:12:30.710724326 +0000 UTC m=+1176.402393365"
Feb 16 15:12:30 crc kubenswrapper[4748]: I0216 15:12:30.721073 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=26.372955877 podStartE2EDuration="58.721047548s" podCreationTimestamp="2026-02-16 15:11:32 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.179909835 +0000 UTC m=+1136.871578874" lastFinishedPulling="2026-02-16 15:12:23.528001506 +0000 UTC m=+1169.219670545" observedRunningTime="2026-02-16 15:12:30.703386176 +0000 UTC m=+1176.395055225" watchObservedRunningTime="2026-02-16 15:12:30.721047548 +0000 UTC m=+1176.412716587"
Feb 16 15:12:31 crc kubenswrapper[4748]: I0216 15:12:31.641871 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62eb48ad-9b6f-4da0-befd-f14a9e32e031","Type":"ContainerStarted","Data":"5b392db8415d0ed9cdff46dd6eda951399bb36add056da8b6d19ceb65da7e293"}
Feb 16 15:12:31 crc kubenswrapper[4748]: I0216 15:12:31.642383 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"62eb48ad-9b6f-4da0-befd-f14a9e32e031","Type":"ContainerStarted","Data":"acac6965e032b7e14f794c1e7e22d0b87affa7d811442872aee6e2aea3297c70"}
Feb 16 15:12:31 crc kubenswrapper[4748]: I0216 15:12:31.871969 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lbwfx"
Feb 16 15:12:31 crc kubenswrapper[4748]: I0216 15:12:31.872023 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lbwfx"
Feb 16 15:12:32 crc kubenswrapper[4748]: I0216 15:12:32.015904 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.745160909 podStartE2EDuration="4.015873129s" podCreationTimestamp="2026-02-16 15:12:28 +0000 UTC" firstStartedPulling="2026-02-16 15:12:29.851070066 +0000 UTC m=+1175.542739105" lastFinishedPulling="2026-02-16 15:12:31.121782286 +0000 UTC m=+1176.813451325" observedRunningTime="2026-02-16 15:12:31.663352803 +0000 UTC m=+1177.355021842" watchObservedRunningTime="2026-02-16 15:12:32.015873129 +0000 UTC m=+1177.707542168"
Feb 16 15:12:32 crc kubenswrapper[4748]: I0216 15:12:32.405172 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Feb 16 15:12:32 crc kubenswrapper[4748]: I0216 15:12:32.405214 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Feb 16 15:12:32 crc kubenswrapper[4748]: I0216 15:12:32.651910 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Feb 16 15:12:32 crc kubenswrapper[4748]: I0216 15:12:32.928968 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lbwfx" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="registry-server" probeResult="failure" output=<
Feb 16 15:12:32 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s
Feb 16 15:12:32 crc kubenswrapper[4748]: >
Feb 16 15:12:33 crc kubenswrapper[4748]: I0216 15:12:33.661795 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c","Type":"ContainerStarted","Data":"614ba93e1a9ec88e6618237ed107365409620ef38fefd3b93c930149a7ea249c"}
Feb 16 15:12:33 crc kubenswrapper[4748]: I0216 15:12:33.663544 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c06fd3c2-2bb4-40e8-8911-4f30daf28f43","Type":"ContainerStarted","Data":"ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a"}
Feb 16 15:12:33 crc kubenswrapper[4748]: I0216 15:12:33.688077 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=16.528105323 podStartE2EDuration="57.688051423s" podCreationTimestamp="2026-02-16 15:11:36 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.196185194 +0000 UTC m=+1136.887854233" lastFinishedPulling="2026-02-16 15:12:32.356131294 +0000 UTC m=+1178.047800333" observedRunningTime="2026-02-16 15:12:33.678652003 +0000 UTC m=+1179.370321042" watchObservedRunningTime="2026-02-16 15:12:33.688051423 +0000 UTC m=+1179.379720462"
Feb 16 15:12:33 crc kubenswrapper[4748]: I0216 15:12:33.768437 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:12:33 crc kubenswrapper[4748]: I0216 15:12:33.768515 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Feb 16 15:12:34 crc kubenswrapper[4748]: I0216 15:12:34.673308 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerStarted","Data":"f38afe69bdd0099b17b3cd0853c0364ce1d1e25cd581aff8af825aca46249b6c"}
Feb 16 15:12:34 crc kubenswrapper[4748]: I0216 15:12:34.729091 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:12:34 crc kubenswrapper[4748]: I0216 15:12:34.729171 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.437618 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hncsb"]
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.445089 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" podUID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerName="dnsmasq-dns" containerID="cri-o://363a66b8f128d6cb639dd5bc02b9989a34328c1f65db9c6a2db66ae431a20732" gracePeriod=10
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.446985 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.460725 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.491655 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-9np95"]
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.493236 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.545600 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9np95"]
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.594035 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.649626 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnvsg\" (UniqueName: \"kubernetes.io/projected/37fffb84-58d3-4922-b28c-d85aa6986ce7-kube-api-access-rnvsg\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.649761 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.649805 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.649838 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-dns-svc\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.650032 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.693150 4748 generic.go:334] "Generic (PLEG): container finished" podID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerID="363a66b8f128d6cb639dd5bc02b9989a34328c1f65db9c6a2db66ae431a20732" exitCode=0
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.693872 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" event={"ID":"f3015788-8e77-4cad-9a4f-e7fdf877d085","Type":"ContainerDied","Data":"363a66b8f128d6cb639dd5bc02b9989a34328c1f65db9c6a2db66ae431a20732"}
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.695625 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c","Type":"ContainerStarted","Data":"47f1400fc1396ce8b9d7f148a51372001f748277063294ba90379a6b4ffefede"}
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.698255 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.710081 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.752878 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.753050 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnvsg\" (UniqueName: \"kubernetes.io/projected/37fffb84-58d3-4922-b28c-d85aa6986ce7-kube-api-access-rnvsg\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.753096 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.753120 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.753146 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-dns-svc\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.754249 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.754744 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-dns-svc\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.755420 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.755530 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.801261 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=19.386508677 podStartE2EDuration="1m0.801220388s" podCreationTimestamp="2026-02-16 15:11:36 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.642896037 +0000 UTC m=+1137.334565076" lastFinishedPulling="2026-02-16 15:12:33.057607748 +0000 UTC m=+1178.749276787" observedRunningTime="2026-02-16 15:12:36.765135894 +0000 UTC m=+1182.456804943" watchObservedRunningTime="2026-02-16 15:12:36.801220388 +0000 UTC m=+1182.492889437"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.805281 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnvsg\" (UniqueName: \"kubernetes.io/projected/37fffb84-58d3-4922-b28c-d85aa6986ce7-kube-api-access-rnvsg\") pod \"dnsmasq-dns-698758b865-9np95\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.827673 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9np95"
Feb 16 15:12:36 crc kubenswrapper[4748]: I0216 15:12:36.837132 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.259846 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb"
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.383961 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-dns-svc\") pod \"f3015788-8e77-4cad-9a4f-e7fdf877d085\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") "
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.384344 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-ovsdbserver-sb\") pod \"f3015788-8e77-4cad-9a4f-e7fdf877d085\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") "
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.384467 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-config\") pod \"f3015788-8e77-4cad-9a4f-e7fdf877d085\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") "
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.384575 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdrtl\" (UniqueName: \"kubernetes.io/projected/f3015788-8e77-4cad-9a4f-e7fdf877d085-kube-api-access-cdrtl\") pod \"f3015788-8e77-4cad-9a4f-e7fdf877d085\" (UID: \"f3015788-8e77-4cad-9a4f-e7fdf877d085\") "
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.396053 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3015788-8e77-4cad-9a4f-e7fdf877d085-kube-api-access-cdrtl" (OuterVolumeSpecName: "kube-api-access-cdrtl") pod "f3015788-8e77-4cad-9a4f-e7fdf877d085" (UID: "f3015788-8e77-4cad-9a4f-e7fdf877d085"). InnerVolumeSpecName "kube-api-access-cdrtl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.438701 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-config" (OuterVolumeSpecName: "config") pod "f3015788-8e77-4cad-9a4f-e7fdf877d085" (UID: "f3015788-8e77-4cad-9a4f-e7fdf877d085"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.445873 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f3015788-8e77-4cad-9a4f-e7fdf877d085" (UID: "f3015788-8e77-4cad-9a4f-e7fdf877d085"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.450173 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f3015788-8e77-4cad-9a4f-e7fdf877d085" (UID: "f3015788-8e77-4cad-9a4f-e7fdf877d085"). InnerVolumeSpecName "ovsdbserver-sb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.486971 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.487004 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.487015 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdrtl\" (UniqueName: \"kubernetes.io/projected/f3015788-8e77-4cad-9a4f-e7fdf877d085-kube-api-access-cdrtl\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.487025 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f3015788-8e77-4cad-9a4f-e7fdf877d085-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.629542 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 16 15:12:37 crc kubenswrapper[4748]: E0216 15:12:37.634172 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerName="dnsmasq-dns" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.634199 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerName="dnsmasq-dns" Feb 16 15:12:37 crc kubenswrapper[4748]: E0216 15:12:37.634218 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerName="init" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.634224 4748 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerName="init" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.634423 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3015788-8e77-4cad-9a4f-e7fdf877d085" containerName="dnsmasq-dns" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.663398 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.670434 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.674346 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.674608 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.674925 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mwsfm" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.691053 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-lock\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.691116 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7q2x\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-kube-api-access-h7q2x\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.691163 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.691240 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.691391 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-cache\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.691445 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.704829 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.708901 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9np95"] Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.737671 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 
15:12:37.739075 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" event={"ID":"f3015788-8e77-4cad-9a4f-e7fdf877d085","Type":"ContainerDied","Data":"e116b75f084e3d52b4e7fb8b21a9dec1599b82b60ca2aee3d869f0d857887577"} Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.739149 4748 scope.go:117] "RemoveContainer" containerID="363a66b8f128d6cb639dd5bc02b9989a34328c1f65db9c6a2db66ae431a20732" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.739369 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hncsb" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.743091 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9np95" event={"ID":"37fffb84-58d3-4922-b28c-d85aa6986ce7","Type":"ContainerStarted","Data":"0887990cea8f51996666db9286acba473f7435221d4026c87bfbc4dbdba5feb4"} Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.794924 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.795040 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-lock\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.795061 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7q2x\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-kube-api-access-h7q2x\") pod \"swift-storage-0\" (UID: 
\"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.795086 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.795118 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.795223 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-cache\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: E0216 15:12:37.796072 4748 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:12:37 crc kubenswrapper[4748]: E0216 15:12:37.796096 4748 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:12:37 crc kubenswrapper[4748]: E0216 15:12:37.796834 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift podName:3c460294-3cc7-4770-9a8a-0bd7c2b8fad2 nodeName:}" failed. No retries permitted until 2026-02-16 15:12:38.296814988 +0000 UTC m=+1183.988484027 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift") pod "swift-storage-0" (UID: "3c460294-3cc7-4770-9a8a-0bd7c2b8fad2") : configmap "swift-ring-files" not found Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.797449 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-cache\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.797890 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-lock\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.801685 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.801730 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/886c8fd5a7e8711ac2ffd5f6f108f099b435eeb8c11bb001897346db01f7854d/globalmount\"" pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.803752 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.823054 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7q2x\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-kube-api-access-h7q2x\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.867620 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-5d2fd778-c14e-4c1e-ac28-0b6996a2e049\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.901663 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.926145 4748 scope.go:117] "RemoveContainer" 
containerID="89e1d30c886fa4fa38ab1f8757036f9dcd2c8f1019761bf67eed5a086c55a611" Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.959939 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hncsb"] Feb 16 15:12:37 crc kubenswrapper[4748]: I0216 15:12:37.971065 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hncsb"] Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.026218 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-vwd9b" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.030463 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.217622 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-7lv6s" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.219529 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-tpchh"] Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.221124 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.224860 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.225175 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.228753 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.239266 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tpchh"] Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.304520 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:38 crc kubenswrapper[4748]: E0216 15:12:38.305460 4748 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:12:38 crc kubenswrapper[4748]: E0216 15:12:38.305477 4748 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:12:38 crc kubenswrapper[4748]: E0216 15:12:38.305775 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift podName:3c460294-3cc7-4770-9a8a-0bd7c2b8fad2 nodeName:}" failed. No retries permitted until 2026-02-16 15:12:39.305758116 +0000 UTC m=+1184.997427155 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift") pod "swift-storage-0" (UID: "3c460294-3cc7-4770-9a8a-0bd7c2b8fad2") : configmap "swift-ring-files" not found Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.377708 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.407357 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-swiftconf\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.407686 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-dispersionconf\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.407764 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-ring-data-devices\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.407849 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/87853597-3b96-46e9-803b-ce992b010f0b-etc-swift\") pod \"swift-ring-rebalance-tpchh\" (UID: 
\"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.407897 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-scripts\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.407911 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-combined-ca-bundle\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.407995 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjqr\" (UniqueName: \"kubernetes.io/projected/87853597-3b96-46e9-803b-ce992b010f0b-kube-api-access-hjjqr\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.509220 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-scripts\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.509265 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-combined-ca-bundle\") pod \"swift-ring-rebalance-tpchh\" (UID: 
\"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.509335 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjqr\" (UniqueName: \"kubernetes.io/projected/87853597-3b96-46e9-803b-ce992b010f0b-kube-api-access-hjjqr\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.509365 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-swiftconf\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.509385 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-dispersionconf\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.509436 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-ring-data-devices\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.509489 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/87853597-3b96-46e9-803b-ce992b010f0b-etc-swift\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " 
pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.510756 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-scripts\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.510754 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-ring-data-devices\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.511091 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/87853597-3b96-46e9-803b-ce992b010f0b-etc-swift\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.514810 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-swiftconf\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.515956 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-combined-ca-bundle\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.520172 4748 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-dispersionconf\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.531739 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjqr\" (UniqueName: \"kubernetes.io/projected/87853597-3b96-46e9-803b-ce992b010f0b-kube-api-access-hjjqr\") pod \"swift-ring-rebalance-tpchh\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.546936 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.772448 4748 generic.go:334] "Generic (PLEG): container finished" podID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerID="63b5932c9884725c35e5016afa75ced61693f2e5442063a5294ccf5a024a1239" exitCode=0 Feb 16 15:12:38 crc kubenswrapper[4748]: I0216 15:12:38.772898 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9np95" event={"ID":"37fffb84-58d3-4922-b28c-d85aa6986ce7","Type":"ContainerDied","Data":"63b5932c9884725c35e5016afa75ced61693f2e5442063a5294ccf5a024a1239"} Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.010769 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3015788-8e77-4cad-9a4f-e7fdf877d085" path="/var/lib/kubelet/pods/f3015788-8e77-4cad-9a4f-e7fdf877d085/volumes" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.039773 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-tpchh"] Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.156583 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" 
podUID="ecd4cdcd-6dc0-4bba-980e-019d6eae5251" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.278215 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.328692 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:39 crc kubenswrapper[4748]: E0216 15:12:39.328902 4748 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:12:39 crc kubenswrapper[4748]: E0216 15:12:39.328918 4748 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:12:39 crc kubenswrapper[4748]: E0216 15:12:39.328963 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift podName:3c460294-3cc7-4770-9a8a-0bd7c2b8fad2 nodeName:}" failed. No retries permitted until 2026-02-16 15:12:41.328948862 +0000 UTC m=+1187.020617901 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift") pod "swift-storage-0" (UID: "3c460294-3cc7-4770-9a8a-0bd7c2b8fad2") : configmap "swift-ring-files" not found Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.460764 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7024-account-create-update-k45xs"] Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.462037 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.465526 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.473810 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-kcdl5"] Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.475516 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.490760 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7024-account-create-update-k45xs"] Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.500793 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.530171 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kcdl5"] Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.545839 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f893b0c9-16a3-438e-9b07-6043dece0637-operator-scripts\") pod \"glance-7024-account-create-update-k45xs\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.547769 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8xt\" (UniqueName: \"kubernetes.io/projected/f893b0c9-16a3-438e-9b07-6043dece0637-kube-api-access-jv8xt\") pod \"glance-7024-account-create-update-k45xs\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc 
kubenswrapper[4748]: I0216 15:12:39.653772 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f893b0c9-16a3-438e-9b07-6043dece0637-operator-scripts\") pod \"glance-7024-account-create-update-k45xs\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.654259 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa94068-22b9-4a72-9ad0-66b48c0487bf-operator-scripts\") pod \"glance-db-create-kcdl5\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.654468 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv8xt\" (UniqueName: \"kubernetes.io/projected/f893b0c9-16a3-438e-9b07-6043dece0637-kube-api-access-jv8xt\") pod \"glance-7024-account-create-update-k45xs\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.654661 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc427\" (UniqueName: \"kubernetes.io/projected/3fa94068-22b9-4a72-9ad0-66b48c0487bf-kube-api-access-sc427\") pod \"glance-db-create-kcdl5\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.655593 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f893b0c9-16a3-438e-9b07-6043dece0637-operator-scripts\") pod \"glance-7024-account-create-update-k45xs\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " 
pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.695464 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv8xt\" (UniqueName: \"kubernetes.io/projected/f893b0c9-16a3-438e-9b07-6043dece0637-kube-api-access-jv8xt\") pod \"glance-7024-account-create-update-k45xs\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.759879 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc427\" (UniqueName: \"kubernetes.io/projected/3fa94068-22b9-4a72-9ad0-66b48c0487bf-kube-api-access-sc427\") pod \"glance-db-create-kcdl5\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.760000 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa94068-22b9-4a72-9ad0-66b48c0487bf-operator-scripts\") pod \"glance-db-create-kcdl5\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.760869 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa94068-22b9-4a72-9ad0-66b48c0487bf-operator-scripts\") pod \"glance-db-create-kcdl5\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.793216 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc427\" (UniqueName: \"kubernetes.io/projected/3fa94068-22b9-4a72-9ad0-66b48c0487bf-kube-api-access-sc427\") pod \"glance-db-create-kcdl5\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " 
pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.802289 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.819906 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9np95" event={"ID":"37fffb84-58d3-4922-b28c-d85aa6986ce7","Type":"ContainerStarted","Data":"7ef87fe3e16da4a50f9c37091c0df5170bdbf7fa4410abe9506a493923366301"} Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.821225 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-9np95" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.822412 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tpchh" event={"ID":"87853597-3b96-46e9-803b-ce992b010f0b","Type":"ContainerStarted","Data":"88f20d76f4a459f57dda1deb5e2191afbb6a71fe93d87d21babf930d975c5626"} Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.831444 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:39 crc kubenswrapper[4748]: I0216 15:12:39.857252 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-9np95" podStartSLOduration=3.857232094 podStartE2EDuration="3.857232094s" podCreationTimestamp="2026-02-16 15:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:39.853959143 +0000 UTC m=+1185.545628182" watchObservedRunningTime="2026-02-16 15:12:39.857232094 +0000 UTC m=+1185.548901133" Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.467870 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7024-account-create-update-k45xs"] Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.614024 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-kcdl5"] Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.833516 4748 generic.go:334] "Generic (PLEG): container finished" podID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerID="f38afe69bdd0099b17b3cd0853c0364ce1d1e25cd581aff8af825aca46249b6c" exitCode=0 Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.833613 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerDied","Data":"f38afe69bdd0099b17b3cd0853c0364ce1d1e25cd581aff8af825aca46249b6c"} Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.842197 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7024-account-create-update-k45xs" event={"ID":"f893b0c9-16a3-438e-9b07-6043dece0637","Type":"ContainerStarted","Data":"60f9dc1b2e6eaa913acad2de5928e160bc0a842650f2085796d9ae296f3446d4"} Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.842298 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-7024-account-create-update-k45xs" event={"ID":"f893b0c9-16a3-438e-9b07-6043dece0637","Type":"ContainerStarted","Data":"e749b7580daef89649fd91ba701eecf38f50cbeec633cfee9c25258d4abfcdb4"} Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.848823 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kcdl5" event={"ID":"3fa94068-22b9-4a72-9ad0-66b48c0487bf","Type":"ContainerStarted","Data":"49abcd050b3cdf20dc27fa3c0f62b3a801fcbfc00d1696154a01d0dc71a562d4"} Feb 16 15:12:40 crc kubenswrapper[4748]: I0216 15:12:40.882461 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7024-account-create-update-k45xs" podStartSLOduration=1.8824395680000001 podStartE2EDuration="1.882439568s" podCreationTimestamp="2026-02-16 15:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:40.880950102 +0000 UTC m=+1186.572619141" watchObservedRunningTime="2026-02-16 15:12:40.882439568 +0000 UTC m=+1186.574108607" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.035823 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2mzxg"] Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.037272 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.044199 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.069214 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2mzxg"] Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.200776 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1699454f-b6db-4d3f-85f2-8bbf43441f26-operator-scripts\") pod \"root-account-create-update-2mzxg\" (UID: \"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.200988 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrhgz\" (UniqueName: \"kubernetes.io/projected/1699454f-b6db-4d3f-85f2-8bbf43441f26-kube-api-access-xrhgz\") pod \"root-account-create-update-2mzxg\" (UID: \"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.303067 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrhgz\" (UniqueName: \"kubernetes.io/projected/1699454f-b6db-4d3f-85f2-8bbf43441f26-kube-api-access-xrhgz\") pod \"root-account-create-update-2mzxg\" (UID: \"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.303522 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1699454f-b6db-4d3f-85f2-8bbf43441f26-operator-scripts\") pod \"root-account-create-update-2mzxg\" (UID: 
\"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.304667 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1699454f-b6db-4d3f-85f2-8bbf43441f26-operator-scripts\") pod \"root-account-create-update-2mzxg\" (UID: \"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.340449 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrhgz\" (UniqueName: \"kubernetes.io/projected/1699454f-b6db-4d3f-85f2-8bbf43441f26-kube-api-access-xrhgz\") pod \"root-account-create-update-2mzxg\" (UID: \"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.385463 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.406349 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:41 crc kubenswrapper[4748]: E0216 15:12:41.406590 4748 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:12:41 crc kubenswrapper[4748]: E0216 15:12:41.406983 4748 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:12:41 crc kubenswrapper[4748]: E0216 15:12:41.407055 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift podName:3c460294-3cc7-4770-9a8a-0bd7c2b8fad2 nodeName:}" failed. No retries permitted until 2026-02-16 15:12:45.407031549 +0000 UTC m=+1191.098700588 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift") pod "swift-storage-0" (UID: "3c460294-3cc7-4770-9a8a-0bd7c2b8fad2") : configmap "swift-ring-files" not found Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.860746 4748 generic.go:334] "Generic (PLEG): container finished" podID="3fa94068-22b9-4a72-9ad0-66b48c0487bf" containerID="028e8ce4ce984627df86a3f3d2187df9ae87bcbf58657e6f3f5978e38039cbda" exitCode=0 Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.860826 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kcdl5" event={"ID":"3fa94068-22b9-4a72-9ad0-66b48c0487bf","Type":"ContainerDied","Data":"028e8ce4ce984627df86a3f3d2187df9ae87bcbf58657e6f3f5978e38039cbda"} Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.863868 4748 generic.go:334] "Generic (PLEG): container finished" podID="f893b0c9-16a3-438e-9b07-6043dece0637" containerID="60f9dc1b2e6eaa913acad2de5928e160bc0a842650f2085796d9ae296f3446d4" exitCode=0 Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.864841 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7024-account-create-update-k45xs" event={"ID":"f893b0c9-16a3-438e-9b07-6043dece0637","Type":"ContainerDied","Data":"60f9dc1b2e6eaa913acad2de5928e160bc0a842650f2085796d9ae296f3446d4"} Feb 16 15:12:41 crc kubenswrapper[4748]: I0216 15:12:41.940955 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:42 crc kubenswrapper[4748]: I0216 15:12:42.019345 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:42 crc kubenswrapper[4748]: I0216 15:12:42.190572 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwfx"] Feb 16 15:12:43 crc kubenswrapper[4748]: I0216 
15:12:43.887470 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lbwfx" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="registry-server" containerID="cri-o://affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f" gracePeriod=2 Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.327070 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.332563 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.476220 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.480445 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f893b0c9-16a3-438e-9b07-6043dece0637-operator-scripts\") pod \"f893b0c9-16a3-438e-9b07-6043dece0637\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.480518 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv8xt\" (UniqueName: \"kubernetes.io/projected/f893b0c9-16a3-438e-9b07-6043dece0637-kube-api-access-jv8xt\") pod \"f893b0c9-16a3-438e-9b07-6043dece0637\" (UID: \"f893b0c9-16a3-438e-9b07-6043dece0637\") " Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.480638 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa94068-22b9-4a72-9ad0-66b48c0487bf-operator-scripts\") pod \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " Feb 16 
15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.480735 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc427\" (UniqueName: \"kubernetes.io/projected/3fa94068-22b9-4a72-9ad0-66b48c0487bf-kube-api-access-sc427\") pod \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\" (UID: \"3fa94068-22b9-4a72-9ad0-66b48c0487bf\") " Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.481304 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fa94068-22b9-4a72-9ad0-66b48c0487bf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fa94068-22b9-4a72-9ad0-66b48c0487bf" (UID: "3fa94068-22b9-4a72-9ad0-66b48c0487bf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.481366 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f893b0c9-16a3-438e-9b07-6043dece0637-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f893b0c9-16a3-438e-9b07-6043dece0637" (UID: "f893b0c9-16a3-438e-9b07-6043dece0637"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.487598 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f893b0c9-16a3-438e-9b07-6043dece0637-kube-api-access-jv8xt" (OuterVolumeSpecName: "kube-api-access-jv8xt") pod "f893b0c9-16a3-438e-9b07-6043dece0637" (UID: "f893b0c9-16a3-438e-9b07-6043dece0637"). InnerVolumeSpecName "kube-api-access-jv8xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.489633 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fa94068-22b9-4a72-9ad0-66b48c0487bf-kube-api-access-sc427" (OuterVolumeSpecName: "kube-api-access-sc427") pod "3fa94068-22b9-4a72-9ad0-66b48c0487bf" (UID: "3fa94068-22b9-4a72-9ad0-66b48c0487bf"). InnerVolumeSpecName "kube-api-access-sc427". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.582342 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-utilities\") pod \"062968b6-bfb6-4065-84b8-63521623319e\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.582473 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q44j4\" (UniqueName: \"kubernetes.io/projected/062968b6-bfb6-4065-84b8-63521623319e-kube-api-access-q44j4\") pod \"062968b6-bfb6-4065-84b8-63521623319e\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.582634 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-catalog-content\") pod \"062968b6-bfb6-4065-84b8-63521623319e\" (UID: \"062968b6-bfb6-4065-84b8-63521623319e\") " Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.583063 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fa94068-22b9-4a72-9ad0-66b48c0487bf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.583081 4748 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-sc427\" (UniqueName: \"kubernetes.io/projected/3fa94068-22b9-4a72-9ad0-66b48c0487bf-kube-api-access-sc427\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.583091 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f893b0c9-16a3-438e-9b07-6043dece0637-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.583102 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv8xt\" (UniqueName: \"kubernetes.io/projected/f893b0c9-16a3-438e-9b07-6043dece0637-kube-api-access-jv8xt\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.583324 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-utilities" (OuterVolumeSpecName: "utilities") pod "062968b6-bfb6-4065-84b8-63521623319e" (UID: "062968b6-bfb6-4065-84b8-63521623319e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.586793 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/062968b6-bfb6-4065-84b8-63521623319e-kube-api-access-q44j4" (OuterVolumeSpecName: "kube-api-access-q44j4") pod "062968b6-bfb6-4065-84b8-63521623319e" (UID: "062968b6-bfb6-4065-84b8-63521623319e"). InnerVolumeSpecName "kube-api-access-q44j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.608534 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "062968b6-bfb6-4065-84b8-63521623319e" (UID: "062968b6-bfb6-4065-84b8-63521623319e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.670554 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2mzxg"] Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.684685 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.684748 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/062968b6-bfb6-4065-84b8-63521623319e-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.684763 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q44j4\" (UniqueName: \"kubernetes.io/projected/062968b6-bfb6-4065-84b8-63521623319e-kube-api-access-q44j4\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:44 crc kubenswrapper[4748]: W0216 15:12:44.692868 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1699454f_b6db_4d3f_85f2_8bbf43441f26.slice/crio-d787678cd7c9c9c0295b326ce0297bf2acfde1953252943ed43a2f735d0f505d WatchSource:0}: Error finding container d787678cd7c9c9c0295b326ce0297bf2acfde1953252943ed43a2f735d0f505d: Status 404 returned error can't find the container with id d787678cd7c9c9c0295b326ce0297bf2acfde1953252943ed43a2f735d0f505d Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.904867 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tpchh" event={"ID":"87853597-3b96-46e9-803b-ce992b010f0b","Type":"ContainerStarted","Data":"dc58b4452c3566fb534d5f051a4d2fa520ab35cf6d2cbb958a6e40eacb961b72"} Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.907820 4748 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/root-account-create-update-2mzxg" event={"ID":"1699454f-b6db-4d3f-85f2-8bbf43441f26","Type":"ContainerStarted","Data":"ee1a1deca949716e29ed0e0eec7f31924e160960019e8075e6883ef2166dbf28"} Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.908275 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2mzxg" event={"ID":"1699454f-b6db-4d3f-85f2-8bbf43441f26","Type":"ContainerStarted","Data":"d787678cd7c9c9c0295b326ce0297bf2acfde1953252943ed43a2f735d0f505d"} Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.910588 4748 generic.go:334] "Generic (PLEG): container finished" podID="062968b6-bfb6-4065-84b8-63521623319e" containerID="affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f" exitCode=0 Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.910729 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwfx" event={"ID":"062968b6-bfb6-4065-84b8-63521623319e","Type":"ContainerDied","Data":"affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f"} Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.910816 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lbwfx" event={"ID":"062968b6-bfb6-4065-84b8-63521623319e","Type":"ContainerDied","Data":"ba791ca8271548a5f663e4eb4671eb74fd3c98d8e10bd921fcc4a13e70c34d60"} Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.910891 4748 scope.go:117] "RemoveContainer" containerID="affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.911060 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lbwfx" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.918363 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7024-account-create-update-k45xs" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.919459 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7024-account-create-update-k45xs" event={"ID":"f893b0c9-16a3-438e-9b07-6043dece0637","Type":"ContainerDied","Data":"e749b7580daef89649fd91ba701eecf38f50cbeec633cfee9c25258d4abfcdb4"} Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.919560 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e749b7580daef89649fd91ba701eecf38f50cbeec633cfee9c25258d4abfcdb4" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.924171 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-kcdl5" event={"ID":"3fa94068-22b9-4a72-9ad0-66b48c0487bf","Type":"ContainerDied","Data":"49abcd050b3cdf20dc27fa3c0f62b3a801fcbfc00d1696154a01d0dc71a562d4"} Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.930772 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49abcd050b3cdf20dc27fa3c0f62b3a801fcbfc00d1696154a01d0dc71a562d4" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.929063 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-kcdl5" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.939864 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-tpchh" podStartSLOduration=1.9350747510000001 podStartE2EDuration="6.939835415s" podCreationTimestamp="2026-02-16 15:12:38 +0000 UTC" firstStartedPulling="2026-02-16 15:12:39.057100932 +0000 UTC m=+1184.748769971" lastFinishedPulling="2026-02-16 15:12:44.061861596 +0000 UTC m=+1189.753530635" observedRunningTime="2026-02-16 15:12:44.927629186 +0000 UTC m=+1190.619298235" watchObservedRunningTime="2026-02-16 15:12:44.939835415 +0000 UTC m=+1190.631504664" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.950701 4748 scope.go:117] "RemoveContainer" containerID="dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.959929 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2mzxg" podStartSLOduration=3.959910707 podStartE2EDuration="3.959910707s" podCreationTimestamp="2026-02-16 15:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:44.947104313 +0000 UTC m=+1190.638773352" watchObservedRunningTime="2026-02-16 15:12:44.959910707 +0000 UTC m=+1190.651579746" Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.985598 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwfx"] Feb 16 15:12:44 crc kubenswrapper[4748]: I0216 15:12:44.991591 4748 scope.go:117] "RemoveContainer" containerID="406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.009283 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lbwfx"] Feb 16 15:12:45 crc 
kubenswrapper[4748]: I0216 15:12:45.053772 4748 scope.go:117] "RemoveContainer" containerID="affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.058201 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f\": container with ID starting with affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f not found: ID does not exist" containerID="affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.058261 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f"} err="failed to get container status \"affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f\": rpc error: code = NotFound desc = could not find container \"affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f\": container with ID starting with affb7a9a564fdb7b43fdff5fdbb1c4ae7ffe682c0c477548808c61797120c06f not found: ID does not exist" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.058294 4748 scope.go:117] "RemoveContainer" containerID="dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.061881 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b\": container with ID starting with dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b not found: ID does not exist" containerID="dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.062063 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b"} err="failed to get container status \"dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b\": rpc error: code = NotFound desc = could not find container \"dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b\": container with ID starting with dc7d61ad2b378ca2a1eb35e6467f86dc787b76927a93106f9c55e8ab67ba338b not found: ID does not exist" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.062177 4748 scope.go:117] "RemoveContainer" containerID="406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.062775 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9\": container with ID starting with 406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9 not found: ID does not exist" containerID="406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.062900 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9"} err="failed to get container status \"406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9\": rpc error: code = NotFound desc = could not find container \"406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9\": container with ID starting with 406fe8efda155bdfdeaf429c3658109ba8ea5b10d5b708dc4cf2accbcd8e32c9 not found: ID does not exist" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.104319 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-thj9q"] Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.104959 4748 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="registry-server" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.105048 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="registry-server" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.105107 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="extract-content" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.105171 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="extract-content" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.105261 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="extract-utilities" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.105325 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="extract-utilities" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.105390 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fa94068-22b9-4a72-9ad0-66b48c0487bf" containerName="mariadb-database-create" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.105451 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fa94068-22b9-4a72-9ad0-66b48c0487bf" containerName="mariadb-database-create" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.105522 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f893b0c9-16a3-438e-9b07-6043dece0637" containerName="mariadb-account-create-update" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.105587 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f893b0c9-16a3-438e-9b07-6043dece0637" containerName="mariadb-account-create-update" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.105854 4748 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f893b0c9-16a3-438e-9b07-6043dece0637" containerName="mariadb-account-create-update" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.105947 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="062968b6-bfb6-4065-84b8-63521623319e" containerName="registry-server" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.106009 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fa94068-22b9-4a72-9ad0-66b48c0487bf" containerName="mariadb-database-create" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.106813 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.129503 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-thj9q"] Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.194745 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/364079ae-4285-4259-8cc2-59fe99051ee9-operator-scripts\") pod \"keystone-db-create-thj9q\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") " pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.195067 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsm44\" (UniqueName: \"kubernetes.io/projected/364079ae-4285-4259-8cc2-59fe99051ee9-kube-api-access-rsm44\") pod \"keystone-db-create-thj9q\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") " pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.216373 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-8e95-account-create-update-zcbxs"] Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.217537 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.220255 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.224834 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8e95-account-create-update-zcbxs"] Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.297013 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rsm44\" (UniqueName: \"kubernetes.io/projected/364079ae-4285-4259-8cc2-59fe99051ee9-kube-api-access-rsm44\") pod \"keystone-db-create-thj9q\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") " pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.297170 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94m8m\" (UniqueName: \"kubernetes.io/projected/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-kube-api-access-94m8m\") pod \"keystone-8e95-account-create-update-zcbxs\" (UID: \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") " pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.297506 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/364079ae-4285-4259-8cc2-59fe99051ee9-operator-scripts\") pod \"keystone-db-create-thj9q\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") " pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.297779 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-operator-scripts\") pod \"keystone-8e95-account-create-update-zcbxs\" (UID: 
\"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") " pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.298991 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/364079ae-4285-4259-8cc2-59fe99051ee9-operator-scripts\") pod \"keystone-db-create-thj9q\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") " pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.334521 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rsm44\" (UniqueName: \"kubernetes.io/projected/364079ae-4285-4259-8cc2-59fe99051ee9-kube-api-access-rsm44\") pod \"keystone-db-create-thj9q\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") " pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.371549 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gt8g9"] Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.373426 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.397539 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gt8g9"] Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.399694 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-operator-scripts\") pod \"keystone-8e95-account-create-update-zcbxs\" (UID: \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") " pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.399823 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94m8m\" (UniqueName: \"kubernetes.io/projected/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-kube-api-access-94m8m\") pod \"keystone-8e95-account-create-update-zcbxs\" (UID: \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") " pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.400746 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-operator-scripts\") pod \"keystone-8e95-account-create-update-zcbxs\" (UID: \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") " pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.422626 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94m8m\" (UniqueName: \"kubernetes.io/projected/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-kube-api-access-94m8m\") pod \"keystone-8e95-account-create-update-zcbxs\" (UID: \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") " pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.431582 4748 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-thj9q" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.501367 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-operator-scripts\") pod \"placement-db-create-gt8g9\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") " pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.501424 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.501526 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtq4\" (UniqueName: \"kubernetes.io/projected/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-kube-api-access-7mtq4\") pod \"placement-db-create-gt8g9\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") " pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.501721 4748 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.501739 4748 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 16 15:12:45 crc kubenswrapper[4748]: E0216 15:12:45.501784 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift podName:3c460294-3cc7-4770-9a8a-0bd7c2b8fad2 nodeName:}" failed. 
No retries permitted until 2026-02-16 15:12:53.501761821 +0000 UTC m=+1199.193430860 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift") pod "swift-storage-0" (UID: "3c460294-3cc7-4770-9a8a-0bd7c2b8fad2") : configmap "swift-ring-files" not found Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.527618 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ec90-account-create-update-pqx92"] Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.529648 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.547251 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.547656 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-8e95-account-create-update-zcbxs" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.606064 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cslm\" (UniqueName: \"kubernetes.io/projected/cae5ce95-90b8-45f3-90a6-08b958802299-kube-api-access-7cslm\") pod \"placement-ec90-account-create-update-pqx92\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") " pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.606628 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mtq4\" (UniqueName: \"kubernetes.io/projected/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-kube-api-access-7mtq4\") pod \"placement-db-create-gt8g9\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") " pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.606764 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae5ce95-90b8-45f3-90a6-08b958802299-operator-scripts\") pod \"placement-ec90-account-create-update-pqx92\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") " pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.606815 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-operator-scripts\") pod \"placement-db-create-gt8g9\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") " pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.607637 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-operator-scripts\") pod \"placement-db-create-gt8g9\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") " pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.680965 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mtq4\" (UniqueName: \"kubernetes.io/projected/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-kube-api-access-7mtq4\") pod \"placement-db-create-gt8g9\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") " pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.696105 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec90-account-create-update-pqx92"] Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.710816 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae5ce95-90b8-45f3-90a6-08b958802299-operator-scripts\") pod \"placement-ec90-account-create-update-pqx92\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") " pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.710939 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cslm\" (UniqueName: \"kubernetes.io/projected/cae5ce95-90b8-45f3-90a6-08b958802299-kube-api-access-7cslm\") pod \"placement-ec90-account-create-update-pqx92\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") " pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.716016 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae5ce95-90b8-45f3-90a6-08b958802299-operator-scripts\") pod \"placement-ec90-account-create-update-pqx92\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") " 
pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.742276 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cslm\" (UniqueName: \"kubernetes.io/projected/cae5ce95-90b8-45f3-90a6-08b958802299-kube-api-access-7cslm\") pod \"placement-ec90-account-create-update-pqx92\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") " pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.802051 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gt8g9" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.870147 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec90-account-create-update-pqx92" Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.966241 4748 generic.go:334] "Generic (PLEG): container finished" podID="1699454f-b6db-4d3f-85f2-8bbf43441f26" containerID="ee1a1deca949716e29ed0e0eec7f31924e160960019e8075e6883ef2166dbf28" exitCode=0 Feb 16 15:12:45 crc kubenswrapper[4748]: I0216 15:12:45.966570 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2mzxg" event={"ID":"1699454f-b6db-4d3f-85f2-8bbf43441f26","Type":"ContainerDied","Data":"ee1a1deca949716e29ed0e0eec7f31924e160960019e8075e6883ef2166dbf28"} Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.215093 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-thj9q"] Feb 16 15:12:46 crc kubenswrapper[4748]: W0216 15:12:46.236984 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod364079ae_4285_4259_8cc2_59fe99051ee9.slice/crio-6546b0765469314e24b2abe6a01af0bf5a47526e4c186728c0f2b69bdb6dc952 WatchSource:0}: Error finding container 
6546b0765469314e24b2abe6a01af0bf5a47526e4c186728c0f2b69bdb6dc952: Status 404 returned error can't find the container with id 6546b0765469314e24b2abe6a01af0bf5a47526e4c186728c0f2b69bdb6dc952 Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.360871 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-8e95-account-create-update-zcbxs"] Feb 16 15:12:46 crc kubenswrapper[4748]: W0216 15:12:46.376000 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46539f2e_bcce_4a2e_b62d_fea1cf34f2eb.slice/crio-9f8479d978700ad4ec523052ee231c28089c3830c788c04f5ec5235d37f52786 WatchSource:0}: Error finding container 9f8479d978700ad4ec523052ee231c28089c3830c788c04f5ec5235d37f52786: Status 404 returned error can't find the container with id 9f8479d978700ad4ec523052ee231c28089c3830c788c04f5ec5235d37f52786 Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.479154 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.506872 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gt8g9"] Feb 16 15:12:46 crc kubenswrapper[4748]: W0216 15:12:46.561491 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fa9e51e_99aa_4a87_be9a_6804a4bb3259.slice/crio-e0e8899b45f9161d683fbb67a1b947f4142dda0f5c363084b8ff44538a30ad28 WatchSource:0}: Error finding container e0e8899b45f9161d683fbb67a1b947f4142dda0f5c363084b8ff44538a30ad28: Status 404 returned error can't find the container with id e0e8899b45f9161d683fbb67a1b947f4142dda0f5c363084b8ff44538a30ad28 Feb 16 15:12:46 crc kubenswrapper[4748]: W0216 15:12:46.653389 4748 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcae5ce95_90b8_45f3_90a6_08b958802299.slice/crio-e08afef52731cf07246978b08781095c4d88282faabf1db5d334a33f5dd4e758 WatchSource:0}: Error finding container e08afef52731cf07246978b08781095c4d88282faabf1db5d334a33f5dd4e758: Status 404 returned error can't find the container with id e08afef52731cf07246978b08781095c4d88282faabf1db5d334a33f5dd4e758 Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.655940 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ec90-account-create-update-pqx92"] Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.830115 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-9np95" Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.899702 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dz5n4"] Feb 16 15:12:46 crc kubenswrapper[4748]: I0216 15:12:46.899996 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerName="dnsmasq-dns" containerID="cri-o://2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4" gracePeriod=10 Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.011484 4748 generic.go:334] "Generic (PLEG): container finished" podID="364079ae-4285-4259-8cc2-59fe99051ee9" containerID="934d29656fe2d31b40568f6621aee3dcbd434c869781df06f04eef2850d213d4" exitCode=0 Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.019856 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="062968b6-bfb6-4065-84b8-63521623319e" path="/var/lib/kubelet/pods/062968b6-bfb6-4065-84b8-63521623319e/volumes" Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.020876 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-thj9q" 
event={"ID":"364079ae-4285-4259-8cc2-59fe99051ee9","Type":"ContainerDied","Data":"934d29656fe2d31b40568f6621aee3dcbd434c869781df06f04eef2850d213d4"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.020909 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-thj9q" event={"ID":"364079ae-4285-4259-8cc2-59fe99051ee9","Type":"ContainerStarted","Data":"6546b0765469314e24b2abe6a01af0bf5a47526e4c186728c0f2b69bdb6dc952"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.038395 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec90-account-create-update-pqx92" event={"ID":"cae5ce95-90b8-45f3-90a6-08b958802299","Type":"ContainerStarted","Data":"12407b8b22c7c3dfc26eec8b5b583ac2768d42c5fe19d45fd1021ec0336e4ae5"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.040498 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec90-account-create-update-pqx92" event={"ID":"cae5ce95-90b8-45f3-90a6-08b958802299","Type":"ContainerStarted","Data":"e08afef52731cf07246978b08781095c4d88282faabf1db5d334a33f5dd4e758"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.050346 4748 generic.go:334] "Generic (PLEG): container finished" podID="46539f2e-bcce-4a2e-b62d-fea1cf34f2eb" containerID="d219dbce367c55a26022e9a9e79570e1c4799d70af9d9c503989543cc41ca993" exitCode=0 Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.050445 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8e95-account-create-update-zcbxs" event={"ID":"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb","Type":"ContainerDied","Data":"d219dbce367c55a26022e9a9e79570e1c4799d70af9d9c503989543cc41ca993"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.050486 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8e95-account-create-update-zcbxs" 
event={"ID":"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb","Type":"ContainerStarted","Data":"9f8479d978700ad4ec523052ee231c28089c3830c788c04f5ec5235d37f52786"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.070765 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gt8g9" event={"ID":"4fa9e51e-99aa-4a87-be9a-6804a4bb3259","Type":"ContainerStarted","Data":"a37ff9ea47a10cd8c14c0c8fcee61b78f42d2668a283d6d23dce4a36b7705c44"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.070808 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gt8g9" event={"ID":"4fa9e51e-99aa-4a87-be9a-6804a4bb3259","Type":"ContainerStarted","Data":"e0e8899b45f9161d683fbb67a1b947f4142dda0f5c363084b8ff44538a30ad28"} Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.073658 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ec90-account-create-update-pqx92" podStartSLOduration=2.073636398 podStartE2EDuration="2.073636398s" podCreationTimestamp="2026-02-16 15:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:47.061784598 +0000 UTC m=+1192.753453637" watchObservedRunningTime="2026-02-16 15:12:47.073636398 +0000 UTC m=+1192.765305437" Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.677536 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2mzxg" Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.739237 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.769030 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvdt4\" (UniqueName: \"kubernetes.io/projected/01a9253f-ccde-45fd-801d-78e2ce029d23-kube-api-access-bvdt4\") pod \"01a9253f-ccde-45fd-801d-78e2ce029d23\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.769136 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrhgz\" (UniqueName: \"kubernetes.io/projected/1699454f-b6db-4d3f-85f2-8bbf43441f26-kube-api-access-xrhgz\") pod \"1699454f-b6db-4d3f-85f2-8bbf43441f26\" (UID: \"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.776007 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-config\") pod \"01a9253f-ccde-45fd-801d-78e2ce029d23\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.776207 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1699454f-b6db-4d3f-85f2-8bbf43441f26-operator-scripts\") pod \"1699454f-b6db-4d3f-85f2-8bbf43441f26\" (UID: \"1699454f-b6db-4d3f-85f2-8bbf43441f26\") " Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.776239 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-dns-svc\") pod \"01a9253f-ccde-45fd-801d-78e2ce029d23\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.776269 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-sb\") pod \"01a9253f-ccde-45fd-801d-78e2ce029d23\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.776311 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-nb\") pod \"01a9253f-ccde-45fd-801d-78e2ce029d23\" (UID: \"01a9253f-ccde-45fd-801d-78e2ce029d23\") " Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.779120 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01a9253f-ccde-45fd-801d-78e2ce029d23-kube-api-access-bvdt4" (OuterVolumeSpecName: "kube-api-access-bvdt4") pod "01a9253f-ccde-45fd-801d-78e2ce029d23" (UID: "01a9253f-ccde-45fd-801d-78e2ce029d23"). InnerVolumeSpecName "kube-api-access-bvdt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.779881 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1699454f-b6db-4d3f-85f2-8bbf43441f26-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1699454f-b6db-4d3f-85f2-8bbf43441f26" (UID: "1699454f-b6db-4d3f-85f2-8bbf43441f26"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.784005 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1699454f-b6db-4d3f-85f2-8bbf43441f26-kube-api-access-xrhgz" (OuterVolumeSpecName: "kube-api-access-xrhgz") pod "1699454f-b6db-4d3f-85f2-8bbf43441f26" (UID: "1699454f-b6db-4d3f-85f2-8bbf43441f26"). InnerVolumeSpecName "kube-api-access-xrhgz". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.840017 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-config" (OuterVolumeSpecName: "config") pod "01a9253f-ccde-45fd-801d-78e2ce029d23" (UID: "01a9253f-ccde-45fd-801d-78e2ce029d23"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.861331 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "01a9253f-ccde-45fd-801d-78e2ce029d23" (UID: "01a9253f-ccde-45fd-801d-78e2ce029d23"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.879101 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.879128 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvdt4\" (UniqueName: \"kubernetes.io/projected/01a9253f-ccde-45fd-801d-78e2ce029d23-kube-api-access-bvdt4\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.879141 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrhgz\" (UniqueName: \"kubernetes.io/projected/1699454f-b6db-4d3f-85f2-8bbf43441f26-kube-api-access-xrhgz\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.879151 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.879160 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1699454f-b6db-4d3f-85f2-8bbf43441f26-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.879793 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "01a9253f-ccde-45fd-801d-78e2ce029d23" (UID: "01a9253f-ccde-45fd-801d-78e2ce029d23"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.881780 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "01a9253f-ccde-45fd-801d-78e2ce029d23" (UID: "01a9253f-ccde-45fd-801d-78e2ce029d23"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.981673 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:47 crc kubenswrapper[4748]: I0216 15:12:47.981707 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/01a9253f-ccde-45fd-801d-78e2ce029d23-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.089934 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2mzxg"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.089956 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2mzxg" event={"ID":"1699454f-b6db-4d3f-85f2-8bbf43441f26","Type":"ContainerDied","Data":"d787678cd7c9c9c0295b326ce0297bf2acfde1953252943ed43a2f735d0f505d"}
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.089994 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d787678cd7c9c9c0295b326ce0297bf2acfde1953252943ed43a2f735d0f505d"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.095534 4748 generic.go:334] "Generic (PLEG): container finished" podID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerID="2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4" exitCode=0
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.095567 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.095603 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" event={"ID":"01a9253f-ccde-45fd-801d-78e2ce029d23","Type":"ContainerDied","Data":"2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4"}
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.095639 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" event={"ID":"01a9253f-ccde-45fd-801d-78e2ce029d23","Type":"ContainerDied","Data":"71d27d4da6ea4e94c63116e97b25c305933dbfeadd85e481955e5022db0960a3"}
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.095674 4748 scope.go:117] "RemoveContainer" containerID="2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.098409 4748 generic.go:334] "Generic (PLEG): container finished" podID="4fa9e51e-99aa-4a87-be9a-6804a4bb3259" containerID="a37ff9ea47a10cd8c14c0c8fcee61b78f42d2668a283d6d23dce4a36b7705c44" exitCode=0
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.098460 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gt8g9" event={"ID":"4fa9e51e-99aa-4a87-be9a-6804a4bb3259","Type":"ContainerDied","Data":"a37ff9ea47a10cd8c14c0c8fcee61b78f42d2668a283d6d23dce4a36b7705c44"}
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.100918 4748 generic.go:334] "Generic (PLEG): container finished" podID="cae5ce95-90b8-45f3-90a6-08b958802299" containerID="12407b8b22c7c3dfc26eec8b5b583ac2768d42c5fe19d45fd1021ec0336e4ae5" exitCode=0
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.101018 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec90-account-create-update-pqx92" event={"ID":"cae5ce95-90b8-45f3-90a6-08b958802299","Type":"ContainerDied","Data":"12407b8b22c7c3dfc26eec8b5b583ac2768d42c5fe19d45fd1021ec0336e4ae5"}
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.148547 4748 scope.go:117] "RemoveContainer" containerID="b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.160914 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dz5n4"]
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.168886 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-dz5n4"]
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.179371 4748 scope.go:117] "RemoveContainer" containerID="2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4"
Feb 16 15:12:48 crc kubenswrapper[4748]: E0216 15:12:48.179991 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4\": container with ID starting with 2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4 not found: ID does not exist" containerID="2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.180027 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4"} err="failed to get container status \"2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4\": rpc error: code = NotFound desc = could not find container \"2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4\": container with ID starting with 2d2838b390ebf4cde88c604305915b86629027209e64337d63d1e06620b1dfe4 not found: ID does not exist"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.180062 4748 scope.go:117] "RemoveContainer" containerID="b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3"
Feb 16 15:12:48 crc kubenswrapper[4748]: E0216 15:12:48.181931 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3\": container with ID starting with b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3 not found: ID does not exist" containerID="b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.181977 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3"} err="failed to get container status \"b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3\": rpc error: code = NotFound desc = could not find container \"b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3\": container with ID starting with b396980d793167bb5a251f3c2cf4900f6c90a50b4febe020a723d538c16426f3 not found: ID does not exist"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.673031 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-thj9q"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.803280 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/364079ae-4285-4259-8cc2-59fe99051ee9-operator-scripts\") pod \"364079ae-4285-4259-8cc2-59fe99051ee9\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") "
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.803367 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsm44\" (UniqueName: \"kubernetes.io/projected/364079ae-4285-4259-8cc2-59fe99051ee9-kube-api-access-rsm44\") pod \"364079ae-4285-4259-8cc2-59fe99051ee9\" (UID: \"364079ae-4285-4259-8cc2-59fe99051ee9\") "
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.804659 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/364079ae-4285-4259-8cc2-59fe99051ee9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "364079ae-4285-4259-8cc2-59fe99051ee9" (UID: "364079ae-4285-4259-8cc2-59fe99051ee9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.805414 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/364079ae-4285-4259-8cc2-59fe99051ee9-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.810547 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/364079ae-4285-4259-8cc2-59fe99051ee9-kube-api-access-rsm44" (OuterVolumeSpecName: "kube-api-access-rsm44") pod "364079ae-4285-4259-8cc2-59fe99051ee9" (UID: "364079ae-4285-4259-8cc2-59fe99051ee9"). InnerVolumeSpecName "kube-api-access-rsm44". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.864373 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gt8g9"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.869956 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8e95-account-create-update-zcbxs"
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.907299 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-operator-scripts\") pod \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") "
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.907454 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mtq4\" (UniqueName: \"kubernetes.io/projected/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-kube-api-access-7mtq4\") pod \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\" (UID: \"4fa9e51e-99aa-4a87-be9a-6804a4bb3259\") "
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.908006 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rsm44\" (UniqueName: \"kubernetes.io/projected/364079ae-4285-4259-8cc2-59fe99051ee9-kube-api-access-rsm44\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.913664 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4fa9e51e-99aa-4a87-be9a-6804a4bb3259" (UID: "4fa9e51e-99aa-4a87-be9a-6804a4bb3259"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:48 crc kubenswrapper[4748]: I0216 15:12:48.913874 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-kube-api-access-7mtq4" (OuterVolumeSpecName: "kube-api-access-7mtq4") pod "4fa9e51e-99aa-4a87-be9a-6804a4bb3259" (UID: "4fa9e51e-99aa-4a87-be9a-6804a4bb3259"). InnerVolumeSpecName "kube-api-access-7mtq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.008159 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" path="/var/lib/kubelet/pods/01a9253f-ccde-45fd-801d-78e2ce029d23/volumes"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.011869 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-operator-scripts\") pod \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\" (UID: \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") "
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.012195 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94m8m\" (UniqueName: \"kubernetes.io/projected/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-kube-api-access-94m8m\") pod \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\" (UID: \"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb\") "
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.013004 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "46539f2e-bcce-4a2e-b62d-fea1cf34f2eb" (UID: "46539f2e-bcce-4a2e-b62d-fea1cf34f2eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.013196 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.013601 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mtq4\" (UniqueName: \"kubernetes.io/projected/4fa9e51e-99aa-4a87-be9a-6804a4bb3259-kube-api-access-7mtq4\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.014852 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-kube-api-access-94m8m" (OuterVolumeSpecName: "kube-api-access-94m8m") pod "46539f2e-bcce-4a2e-b62d-fea1cf34f2eb" (UID: "46539f2e-bcce-4a2e-b62d-fea1cf34f2eb"). InnerVolumeSpecName "kube-api-access-94m8m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.114888 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94m8m\" (UniqueName: \"kubernetes.io/projected/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-kube-api-access-94m8m\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.114921 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.115948 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gt8g9" event={"ID":"4fa9e51e-99aa-4a87-be9a-6804a4bb3259","Type":"ContainerDied","Data":"e0e8899b45f9161d683fbb67a1b947f4142dda0f5c363084b8ff44538a30ad28"}
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.115976 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e8899b45f9161d683fbb67a1b947f4142dda0f5c363084b8ff44538a30ad28"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.116032 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gt8g9"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.118000 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-thj9q" event={"ID":"364079ae-4285-4259-8cc2-59fe99051ee9","Type":"ContainerDied","Data":"6546b0765469314e24b2abe6a01af0bf5a47526e4c186728c0f2b69bdb6dc952"}
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.118021 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6546b0765469314e24b2abe6a01af0bf5a47526e4c186728c0f2b69bdb6dc952"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.118026 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-thj9q"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.120770 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-8e95-account-create-update-zcbxs"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.120783 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-8e95-account-create-update-zcbxs" event={"ID":"46539f2e-bcce-4a2e-b62d-fea1cf34f2eb","Type":"ContainerDied","Data":"9f8479d978700ad4ec523052ee231c28089c3830c788c04f5ec5235d37f52786"}
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.120804 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f8479d978700ad4ec523052ee231c28089c3830c788c04f5ec5235d37f52786"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.161004 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ecd4cdcd-6dc0-4bba-980e-019d6eae5251" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.356818 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.567538 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec90-account-create-update-pqx92"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.625642 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae5ce95-90b8-45f3-90a6-08b958802299-operator-scripts\") pod \"cae5ce95-90b8-45f3-90a6-08b958802299\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") "
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.625745 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cslm\" (UniqueName: \"kubernetes.io/projected/cae5ce95-90b8-45f3-90a6-08b958802299-kube-api-access-7cslm\") pod \"cae5ce95-90b8-45f3-90a6-08b958802299\" (UID: \"cae5ce95-90b8-45f3-90a6-08b958802299\") "
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.626924 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cae5ce95-90b8-45f3-90a6-08b958802299-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cae5ce95-90b8-45f3-90a6-08b958802299" (UID: "cae5ce95-90b8-45f3-90a6-08b958802299"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.632593 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cae5ce95-90b8-45f3-90a6-08b958802299-kube-api-access-7cslm" (OuterVolumeSpecName: "kube-api-access-7cslm") pod "cae5ce95-90b8-45f3-90a6-08b958802299" (UID: "cae5ce95-90b8-45f3-90a6-08b958802299"). InnerVolumeSpecName "kube-api-access-7cslm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641127 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-b2zpx"]
Feb 16 15:12:49 crc kubenswrapper[4748]: E0216 15:12:49.641843 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fa9e51e-99aa-4a87-be9a-6804a4bb3259" containerName="mariadb-database-create"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641865 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fa9e51e-99aa-4a87-be9a-6804a4bb3259" containerName="mariadb-database-create"
Feb 16 15:12:49 crc kubenswrapper[4748]: E0216 15:12:49.641886 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1699454f-b6db-4d3f-85f2-8bbf43441f26" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641893 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="1699454f-b6db-4d3f-85f2-8bbf43441f26" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: E0216 15:12:49.641905 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="364079ae-4285-4259-8cc2-59fe99051ee9" containerName="mariadb-database-create"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641914 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="364079ae-4285-4259-8cc2-59fe99051ee9" containerName="mariadb-database-create"
Feb 16 15:12:49 crc kubenswrapper[4748]: E0216 15:12:49.641927 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cae5ce95-90b8-45f3-90a6-08b958802299" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641934 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="cae5ce95-90b8-45f3-90a6-08b958802299" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: E0216 15:12:49.641945 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerName="init"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641952 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerName="init"
Feb 16 15:12:49 crc kubenswrapper[4748]: E0216 15:12:49.641968 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46539f2e-bcce-4a2e-b62d-fea1cf34f2eb" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641975 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="46539f2e-bcce-4a2e-b62d-fea1cf34f2eb" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: E0216 15:12:49.641986 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerName="dnsmasq-dns"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.641992 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerName="dnsmasq-dns"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.642223 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="cae5ce95-90b8-45f3-90a6-08b958802299" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.642235 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="1699454f-b6db-4d3f-85f2-8bbf43441f26" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.642250 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="364079ae-4285-4259-8cc2-59fe99051ee9" containerName="mariadb-database-create"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.642267 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fa9e51e-99aa-4a87-be9a-6804a4bb3259" containerName="mariadb-database-create"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.642275 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerName="dnsmasq-dns"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.642283 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="46539f2e-bcce-4a2e-b62d-fea1cf34f2eb" containerName="mariadb-account-create-update"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.643332 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.647161 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.647257 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dql7h"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.653761 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b2zpx"]
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.728949 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-combined-ca-bundle\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.729104 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-db-sync-config-data\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.729181 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-config-data\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.729339 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4w2r\" (UniqueName: \"kubernetes.io/projected/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-kube-api-access-x4w2r\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.729395 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cae5ce95-90b8-45f3-90a6-08b958802299-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.729408 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cslm\" (UniqueName: \"kubernetes.io/projected/cae5ce95-90b8-45f3-90a6-08b958802299-kube-api-access-7cslm\") on node \"crc\" DevicePath \"\""
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.833838 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-combined-ca-bundle\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.834208 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-db-sync-config-data\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.834258 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-config-data\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.834387 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4w2r\" (UniqueName: \"kubernetes.io/projected/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-kube-api-access-x4w2r\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.838118 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-db-sync-config-data\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.838183 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-combined-ca-bundle\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.839486 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-config-data\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:49 crc kubenswrapper[4748]: I0216 15:12:49.852393 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4w2r\" (UniqueName: \"kubernetes.io/projected/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-kube-api-access-x4w2r\") pod \"glance-db-sync-b2zpx\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:50 crc kubenswrapper[4748]: I0216 15:12:50.002386 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-b2zpx"
Feb 16 15:12:50 crc kubenswrapper[4748]: I0216 15:12:50.190459 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ec90-account-create-update-pqx92" event={"ID":"cae5ce95-90b8-45f3-90a6-08b958802299","Type":"ContainerDied","Data":"e08afef52731cf07246978b08781095c4d88282faabf1db5d334a33f5dd4e758"}
Feb 16 15:12:50 crc kubenswrapper[4748]: I0216 15:12:50.190504 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e08afef52731cf07246978b08781095c4d88282faabf1db5d334a33f5dd4e758"
Feb 16 15:12:50 crc kubenswrapper[4748]: I0216 15:12:50.190584 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ec90-account-create-update-pqx92"
Feb 16 15:12:50 crc kubenswrapper[4748]: I0216 15:12:50.525520 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-b2zpx"]
Feb 16 15:12:52 crc kubenswrapper[4748]: I0216 15:12:52.461459 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2mzxg"]
Feb 16 15:12:52 crc kubenswrapper[4748]: I0216 15:12:52.470575 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2mzxg"]
Feb 16 15:12:52 crc kubenswrapper[4748]: I0216 15:12:52.703756 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-dz5n4" podUID="01a9253f-ccde-45fd-801d-78e2ce029d23" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Feb 16 15:12:53 crc kubenswrapper[4748]: I0216 15:12:53.007456 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1699454f-b6db-4d3f-85f2-8bbf43441f26" path="/var/lib/kubelet/pods/1699454f-b6db-4d3f-85f2-8bbf43441f26/volumes"
Feb 16 15:12:53 crc kubenswrapper[4748]: I0216 15:12:53.229835 4748 generic.go:334] "Generic (PLEG): container finished" podID="87853597-3b96-46e9-803b-ce992b010f0b" containerID="dc58b4452c3566fb534d5f051a4d2fa520ab35cf6d2cbb958a6e40eacb961b72" exitCode=0
Feb 16 15:12:53 crc kubenswrapper[4748]: I0216 15:12:53.229909 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tpchh" event={"ID":"87853597-3b96-46e9-803b-ce992b010f0b","Type":"ContainerDied","Data":"dc58b4452c3566fb534d5f051a4d2fa520ab35cf6d2cbb958a6e40eacb961b72"}
Feb 16 15:12:53 crc kubenswrapper[4748]: I0216 15:12:53.559020 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0"
Feb 16 15:12:53 crc kubenswrapper[4748]: I0216 15:12:53.577941 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3c460294-3cc7-4770-9a8a-0bd7c2b8fad2-etc-swift\") pod \"swift-storage-0\" (UID: \"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2\") " pod="openstack/swift-storage-0"
Feb 16 15:12:53 crc kubenswrapper[4748]: E0216 15:12:53.667338 4748 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda81a287_d981_4b30_8d23_70cbc085368e.slice/crio-conmon-21b566650946c3d07bae6095b600490cc14ece89ad79a10ca04fd9753a21ae71.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 15:12:53 crc kubenswrapper[4748]: I0216 15:12:53.833230 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Feb 16 15:12:54 crc kubenswrapper[4748]: I0216 15:12:54.240286 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zpx" event={"ID":"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe","Type":"ContainerStarted","Data":"cb563fbd679c91529591fd04b9c99488e31cbb227f5cffa32fb45e826ec61182"}
Feb 16 15:12:54 crc kubenswrapper[4748]: I0216 15:12:54.241631 4748 generic.go:334] "Generic (PLEG): container finished" podID="da81a287-d981-4b30-8d23-70cbc085368e" containerID="21b566650946c3d07bae6095b600490cc14ece89ad79a10ca04fd9753a21ae71" exitCode=0
Feb 16 15:12:54 crc kubenswrapper[4748]: I0216 15:12:54.241750 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da81a287-d981-4b30-8d23-70cbc085368e","Type":"ContainerDied","Data":"21b566650946c3d07bae6095b600490cc14ece89ad79a10ca04fd9753a21ae71"}
Feb 16 15:12:54 crc kubenswrapper[4748]: I0216 15:12:54.504241 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 16 15:12:55 crc kubenswrapper[4748]: I0216 15:12:55.648755 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-8q6bn" podUID="5282d6ba-c0a4-4ada-9ffb-d233444b10f1" containerName="ovn-controller" probeResult="failure" output=<
Feb 16 15:12:55 crc kubenswrapper[4748]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Feb 16 15:12:55 crc kubenswrapper[4748]: >
Feb 16 15:12:55 crc kubenswrapper[4748]: I0216 15:12:55.672626 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rhspv"
Feb 16 15:12:55 crc kubenswrapper[4748]: I0216 15:12:55.672677 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rhspv"
Feb 16 15:12:55 crc kubenswrapper[4748]: I0216 15:12:55.884282 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-8q6bn-config-wcj92"]
Feb 16 15:12:55 crc kubenswrapper[4748]: I0216 15:12:55.885750 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8q6bn-config-wcj92"
Feb 16 15:12:55 crc kubenswrapper[4748]: I0216 15:12:55.889598 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Feb 16 15:12:55 crc kubenswrapper[4748]: I0216 15:12:55.899006 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8q6bn-config-wcj92"]
Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.011087 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg57f\" (UniqueName: \"kubernetes.io/projected/ac93cc07-b68d-45f4-acca-c786541b4070-kube-api-access-wg57f\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92"
Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.011300 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-additional-scripts\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92"
Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.011417 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92"
Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.011476 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run-ovn\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: 
\"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.011627 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-log-ovn\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.011657 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-scripts\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.113492 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-additional-scripts\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.113997 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.114232 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run-ovn\") pod \"ovn-controller-8q6bn-config-wcj92\" 
(UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.114451 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-log-ovn\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.114627 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-scripts\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.114962 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg57f\" (UniqueName: \"kubernetes.io/projected/ac93cc07-b68d-45f4-acca-c786541b4070-kube-api-access-wg57f\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.115121 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run-ovn\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.114697 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-additional-scripts\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: 
\"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.115489 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-log-ovn\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.116168 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.119256 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-scripts\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.144065 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg57f\" (UniqueName: \"kubernetes.io/projected/ac93cc07-b68d-45f4-acca-c786541b4070-kube-api-access-wg57f\") pod \"ovn-controller-8q6bn-config-wcj92\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") " pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.209790 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8q6bn-config-wcj92" Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.259892 4748 generic.go:334] "Generic (PLEG): container finished" podID="b081f805-b462-406b-9d37-5aef68dd9edc" containerID="76ef5b5c0f1948a2c1a32e73350d994a797b036d9230aae2020e77c7e988c599" exitCode=0 Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.259962 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b081f805-b462-406b-9d37-5aef68dd9edc","Type":"ContainerDied","Data":"76ef5b5c0f1948a2c1a32e73350d994a797b036d9230aae2020e77c7e988c599"} Feb 16 15:12:56 crc kubenswrapper[4748]: I0216 15:12:56.944670 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.039275 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-combined-ca-bundle\") pod \"87853597-3b96-46e9-803b-ce992b010f0b\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.039686 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-swiftconf\") pod \"87853597-3b96-46e9-803b-ce992b010f0b\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.039729 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjjqr\" (UniqueName: \"kubernetes.io/projected/87853597-3b96-46e9-803b-ce992b010f0b-kube-api-access-hjjqr\") pod \"87853597-3b96-46e9-803b-ce992b010f0b\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.039757 4748 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-scripts\") pod \"87853597-3b96-46e9-803b-ce992b010f0b\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.039783 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/87853597-3b96-46e9-803b-ce992b010f0b-etc-swift\") pod \"87853597-3b96-46e9-803b-ce992b010f0b\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.039838 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-ring-data-devices\") pod \"87853597-3b96-46e9-803b-ce992b010f0b\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.039889 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-dispersionconf\") pod \"87853597-3b96-46e9-803b-ce992b010f0b\" (UID: \"87853597-3b96-46e9-803b-ce992b010f0b\") " Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.042330 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87853597-3b96-46e9-803b-ce992b010f0b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "87853597-3b96-46e9-803b-ce992b010f0b" (UID: "87853597-3b96-46e9-803b-ce992b010f0b"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.046186 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "87853597-3b96-46e9-803b-ce992b010f0b" (UID: "87853597-3b96-46e9-803b-ce992b010f0b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.049867 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87853597-3b96-46e9-803b-ce992b010f0b-kube-api-access-hjjqr" (OuterVolumeSpecName: "kube-api-access-hjjqr") pod "87853597-3b96-46e9-803b-ce992b010f0b" (UID: "87853597-3b96-46e9-803b-ce992b010f0b"). InnerVolumeSpecName "kube-api-access-hjjqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.061922 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "87853597-3b96-46e9-803b-ce992b010f0b" (UID: "87853597-3b96-46e9-803b-ce992b010f0b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.109650 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87853597-3b96-46e9-803b-ce992b010f0b" (UID: "87853597-3b96-46e9-803b-ce992b010f0b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.111015 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "87853597-3b96-46e9-803b-ce992b010f0b" (UID: "87853597-3b96-46e9-803b-ce992b010f0b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.118542 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-scripts" (OuterVolumeSpecName: "scripts") pod "87853597-3b96-46e9-803b-ce992b010f0b" (UID: "87853597-3b96-46e9-803b-ce992b010f0b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.142417 4748 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.142461 4748 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.142473 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.142485 4748 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/87853597-3b96-46e9-803b-ce992b010f0b-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:57 crc 
kubenswrapper[4748]: I0216 15:12:57.142499 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjjqr\" (UniqueName: \"kubernetes.io/projected/87853597-3b96-46e9-803b-ce992b010f0b-kube-api-access-hjjqr\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.142536 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/87853597-3b96-46e9-803b-ce992b010f0b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.142547 4748 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/87853597-3b96-46e9-803b-ce992b010f0b-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.243305 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-8q6bn-config-wcj92"] Feb 16 15:12:57 crc kubenswrapper[4748]: W0216 15:12:57.258069 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac93cc07_b68d_45f4_acca_c786541b4070.slice/crio-bcd2c6d89dc060f9b5f8d576faf292c4c999e5abdc6224a4eeffb6a201038352 WatchSource:0}: Error finding container bcd2c6d89dc060f9b5f8d576faf292c4c999e5abdc6224a4eeffb6a201038352: Status 404 returned error can't find the container with id bcd2c6d89dc060f9b5f8d576faf292c4c999e5abdc6224a4eeffb6a201038352 Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.285893 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"da81a287-d981-4b30-8d23-70cbc085368e","Type":"ContainerStarted","Data":"a640d33e1229b7903e76c8e477ef7aeaba7c56e4e4bc863e5a3e1108341be6c0"} Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.286922 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 16 15:12:57 crc kubenswrapper[4748]: 
I0216 15:12:57.289234 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8q6bn-config-wcj92" event={"ID":"ac93cc07-b68d-45f4-acca-c786541b4070","Type":"ContainerStarted","Data":"bcd2c6d89dc060f9b5f8d576faf292c4c999e5abdc6224a4eeffb6a201038352"} Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.293893 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-tpchh" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.295990 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-tpchh" event={"ID":"87853597-3b96-46e9-803b-ce992b010f0b","Type":"ContainerDied","Data":"88f20d76f4a459f57dda1deb5e2191afbb6a71fe93d87d21babf930d975c5626"} Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.296220 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88f20d76f4a459f57dda1deb5e2191afbb6a71fe93d87d21babf930d975c5626" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.306394 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"4e58d8bd647339128e8c6e4e323865ab87b947627ab4226f79e48de8462e0742"} Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.314339 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerStarted","Data":"a8d90bf5afb71766722cee26af3b301484c8aa5a957ce505db934400dd89baf9"} Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.327334 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=64.410872123 podStartE2EDuration="1m28.327317119s" podCreationTimestamp="2026-02-16 15:11:29 +0000 UTC" firstStartedPulling="2026-02-16 15:11:50.805759899 +0000 UTC m=+1136.497428938" 
lastFinishedPulling="2026-02-16 15:12:14.722204905 +0000 UTC m=+1160.413873934" observedRunningTime="2026-02-16 15:12:57.322772407 +0000 UTC m=+1203.014441446" watchObservedRunningTime="2026-02-16 15:12:57.327317119 +0000 UTC m=+1203.018986158" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.340833 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b081f805-b462-406b-9d37-5aef68dd9edc","Type":"ContainerStarted","Data":"ea733f9f9bd249cfaa372cc2f0801110f41d6a5f9daea814525ae8d73c1a05fe"} Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.341075 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.384903 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371948.46989 podStartE2EDuration="1m28.384885089s" podCreationTimestamp="2026-02-16 15:11:29 +0000 UTC" firstStartedPulling="2026-02-16 15:11:50.739180628 +0000 UTC m=+1136.430849667" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:12:57.377528789 +0000 UTC m=+1203.069197838" watchObservedRunningTime="2026-02-16 15:12:57.384885089 +0000 UTC m=+1203.076554128" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.485243 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qxd97"] Feb 16 15:12:57 crc kubenswrapper[4748]: E0216 15:12:57.485631 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87853597-3b96-46e9-803b-ce992b010f0b" containerName="swift-ring-rebalance" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.485648 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="87853597-3b96-46e9-803b-ce992b010f0b" containerName="swift-ring-rebalance" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.485818 4748 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="87853597-3b96-46e9-803b-ce992b010f0b" containerName="swift-ring-rebalance" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.486430 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.492060 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.516751 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qxd97"] Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.655891 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-operator-scripts\") pod \"root-account-create-update-qxd97\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") " pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.655973 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztwqc\" (UniqueName: \"kubernetes.io/projected/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-kube-api-access-ztwqc\") pod \"root-account-create-update-qxd97\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") " pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.758389 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztwqc\" (UniqueName: \"kubernetes.io/projected/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-kube-api-access-ztwqc\") pod \"root-account-create-update-qxd97\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") " pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.758629 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-operator-scripts\") pod \"root-account-create-update-qxd97\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") " pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.759692 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-operator-scripts\") pod \"root-account-create-update-qxd97\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") " pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.779277 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztwqc\" (UniqueName: \"kubernetes.io/projected/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-kube-api-access-ztwqc\") pod \"root-account-create-update-qxd97\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") " pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:57 crc kubenswrapper[4748]: I0216 15:12:57.811633 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qxd97" Feb 16 15:12:58 crc kubenswrapper[4748]: I0216 15:12:58.349698 4748 generic.go:334] "Generic (PLEG): container finished" podID="ac93cc07-b68d-45f4-acca-c786541b4070" containerID="6a28fc7a1fd9d13104051767db0a555f0c012e2f00e0a7f482c728bd59fb30af" exitCode=0 Feb 16 15:12:58 crc kubenswrapper[4748]: I0216 15:12:58.349757 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8q6bn-config-wcj92" event={"ID":"ac93cc07-b68d-45f4-acca-c786541b4070","Type":"ContainerDied","Data":"6a28fc7a1fd9d13104051767db0a555f0c012e2f00e0a7f482c728bd59fb30af"} Feb 16 15:12:58 crc kubenswrapper[4748]: I0216 15:12:58.373587 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qxd97"] Feb 16 15:12:58 crc kubenswrapper[4748]: W0216 15:12:58.496765 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef384d40_8b76_4b89_ada6_33dc0ff2e6fe.slice/crio-3e7da5af076f4c08f5aac44a9a0de276c973e3852c5c602369e51e6f1b0872e1 WatchSource:0}: Error finding container 3e7da5af076f4c08f5aac44a9a0de276c973e3852c5c602369e51e6f1b0872e1: Status 404 returned error can't find the container with id 3e7da5af076f4c08f5aac44a9a0de276c973e3852c5c602369e51e6f1b0872e1 Feb 16 15:12:59 crc kubenswrapper[4748]: I0216 15:12:59.156926 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ecd4cdcd-6dc0-4bba-980e-019d6eae5251" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 16 15:12:59 crc kubenswrapper[4748]: I0216 15:12:59.362345 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"54cb22a7b6d1b8d5c87e8e546b248c5252bc39976cfb3025e5cc60b0619581b4"} Feb 16 15:12:59 crc 
kubenswrapper[4748]: I0216 15:12:59.362406 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"1329e6b631f54a52c66c654645a84c7c422cab99a239bf9f110dcc112127a91f"} Feb 16 15:12:59 crc kubenswrapper[4748]: I0216 15:12:59.365957 4748 generic.go:334] "Generic (PLEG): container finished" podID="ef384d40-8b76-4b89-ada6-33dc0ff2e6fe" containerID="1287f70c00a10ca1f2e888e95057f7787da63b1159c9529783addd1ba94f678a" exitCode=0 Feb 16 15:12:59 crc kubenswrapper[4748]: I0216 15:12:59.366601 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qxd97" event={"ID":"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe","Type":"ContainerDied","Data":"1287f70c00a10ca1f2e888e95057f7787da63b1159c9529783addd1ba94f678a"} Feb 16 15:12:59 crc kubenswrapper[4748]: I0216 15:12:59.366641 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qxd97" event={"ID":"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe","Type":"ContainerStarted","Data":"3e7da5af076f4c08f5aac44a9a0de276c973e3852c5c602369e51e6f1b0872e1"} Feb 16 15:12:59 crc kubenswrapper[4748]: I0216 15:12:59.857451 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-8q6bn-config-wcj92"
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008360 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-log-ovn\") pod \"ac93cc07-b68d-45f4-acca-c786541b4070\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008410 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run\") pod \"ac93cc07-b68d-45f4-acca-c786541b4070\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008454 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ac93cc07-b68d-45f4-acca-c786541b4070" (UID: "ac93cc07-b68d-45f4-acca-c786541b4070"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008584 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-additional-scripts\") pod \"ac93cc07-b68d-45f4-acca-c786541b4070\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008589 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run" (OuterVolumeSpecName: "var-run") pod "ac93cc07-b68d-45f4-acca-c786541b4070" (UID: "ac93cc07-b68d-45f4-acca-c786541b4070"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008659 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg57f\" (UniqueName: \"kubernetes.io/projected/ac93cc07-b68d-45f4-acca-c786541b4070-kube-api-access-wg57f\") pod \"ac93cc07-b68d-45f4-acca-c786541b4070\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008702 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-scripts\") pod \"ac93cc07-b68d-45f4-acca-c786541b4070\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008750 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run-ovn\") pod \"ac93cc07-b68d-45f4-acca-c786541b4070\" (UID: \"ac93cc07-b68d-45f4-acca-c786541b4070\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.008947 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ac93cc07-b68d-45f4-acca-c786541b4070" (UID: "ac93cc07-b68d-45f4-acca-c786541b4070"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.009332 4748 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.009350 4748 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-log-ovn\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.009359 4748 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ac93cc07-b68d-45f4-acca-c786541b4070-var-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.009659 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "ac93cc07-b68d-45f4-acca-c786541b4070" (UID: "ac93cc07-b68d-45f4-acca-c786541b4070"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.009897 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-scripts" (OuterVolumeSpecName: "scripts") pod "ac93cc07-b68d-45f4-acca-c786541b4070" (UID: "ac93cc07-b68d-45f4-acca-c786541b4070"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.013788 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac93cc07-b68d-45f4-acca-c786541b4070-kube-api-access-wg57f" (OuterVolumeSpecName: "kube-api-access-wg57f") pod "ac93cc07-b68d-45f4-acca-c786541b4070" (UID: "ac93cc07-b68d-45f4-acca-c786541b4070"). InnerVolumeSpecName "kube-api-access-wg57f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.111734 4748 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-additional-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.111774 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg57f\" (UniqueName: \"kubernetes.io/projected/ac93cc07-b68d-45f4-acca-c786541b4070-kube-api-access-wg57f\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.111788 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ac93cc07-b68d-45f4-acca-c786541b4070-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.381439 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"0610393a0e585693578b75fe8b86b6cee0c671afecb0219f0c4e3e130f984743"}
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.381513 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"5f5113fd906f6d3d8410f36be42309887118c743a87dfc17464df2e8d5bc4e68"}
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.385328 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerStarted","Data":"8f18b3963866d9bfd967865fcf5a6626796ceec51ae86e9d31784f1c02f86c76"}
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.387513 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-8q6bn-config-wcj92"
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.387514 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-8q6bn-config-wcj92" event={"ID":"ac93cc07-b68d-45f4-acca-c786541b4070","Type":"ContainerDied","Data":"bcd2c6d89dc060f9b5f8d576faf292c4c999e5abdc6224a4eeffb6a201038352"}
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.387593 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcd2c6d89dc060f9b5f8d576faf292c4c999e5abdc6224a4eeffb6a201038352"
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.641326 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-8q6bn"
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.797846 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qxd97"
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.930519 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-operator-scripts\") pod \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.930693 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztwqc\" (UniqueName: \"kubernetes.io/projected/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-kube-api-access-ztwqc\") pod \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\" (UID: \"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe\") "
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.931359 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef384d40-8b76-4b89-ada6-33dc0ff2e6fe" (UID: "ef384d40-8b76-4b89-ada6-33dc0ff2e6fe"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:00 crc kubenswrapper[4748]: I0216 15:13:00.940196 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-kube-api-access-ztwqc" (OuterVolumeSpecName: "kube-api-access-ztwqc") pod "ef384d40-8b76-4b89-ada6-33dc0ff2e6fe" (UID: "ef384d40-8b76-4b89-ada6-33dc0ff2e6fe"). InnerVolumeSpecName "kube-api-access-ztwqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:01 crc kubenswrapper[4748]: I0216 15:13:01.022174 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-8q6bn-config-wcj92"]
Feb 16 15:13:01 crc kubenswrapper[4748]: I0216 15:13:01.032952 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztwqc\" (UniqueName: \"kubernetes.io/projected/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-kube-api-access-ztwqc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:01 crc kubenswrapper[4748]: I0216 15:13:01.032989 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:01 crc kubenswrapper[4748]: I0216 15:13:01.046418 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-8q6bn-config-wcj92"]
Feb 16 15:13:01 crc kubenswrapper[4748]: I0216 15:13:01.404218 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qxd97" event={"ID":"ef384d40-8b76-4b89-ada6-33dc0ff2e6fe","Type":"ContainerDied","Data":"3e7da5af076f4c08f5aac44a9a0de276c973e3852c5c602369e51e6f1b0872e1"}
Feb 16 15:13:01 crc kubenswrapper[4748]: I0216 15:13:01.404270 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e7da5af076f4c08f5aac44a9a0de276c973e3852c5c602369e51e6f1b0872e1"
Feb 16 15:13:01 crc kubenswrapper[4748]: I0216 15:13:01.404336 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qxd97"
Feb 16 15:13:02 crc kubenswrapper[4748]: I0216 15:13:02.442120 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"1d7e5e4736cd2c0e58f365f0c0ab380e49ce19899ef68065f70cc54b2516e4e9"}
Feb 16 15:13:02 crc kubenswrapper[4748]: I0216 15:13:02.442441 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"eedbcd4a99e811860b60170526e40cd32c29bd390a4b2732b34ae1c78e025730"}
Feb 16 15:13:03 crc kubenswrapper[4748]: I0216 15:13:03.010688 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac93cc07-b68d-45f4-acca-c786541b4070" path="/var/lib/kubelet/pods/ac93cc07-b68d-45f4-acca-c786541b4070/volumes"
Feb 16 15:13:03 crc kubenswrapper[4748]: I0216 15:13:03.464947 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"6ba0e3475795910da3addca82091e9ce57c233736db808a8f4cc18710259e39c"}
Feb 16 15:13:04 crc kubenswrapper[4748]: I0216 15:13:04.477683 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"3179ff9c3491c4b66c205e2661d27f44df8934eb0087c3f7426afbf484e16a33"}
Feb 16 15:13:04 crc kubenswrapper[4748]: I0216 15:13:04.480814 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerStarted","Data":"4851e525e8023ddb80823c43cb1105b73cad6f36eed0fd6e2f8d806d375de953"}
Feb 16 15:13:04 crc kubenswrapper[4748]: I0216 15:13:04.510818 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.347869288 podStartE2EDuration="1m28.510798377s" podCreationTimestamp="2026-02-16 15:11:36 +0000 UTC" firstStartedPulling="2026-02-16 15:11:51.61443429 +0000 UTC m=+1137.306103329" lastFinishedPulling="2026-02-16 15:13:03.777363379 +0000 UTC m=+1209.469032418" observedRunningTime="2026-02-16 15:13:04.509104865 +0000 UTC m=+1210.200773904" watchObservedRunningTime="2026-02-16 15:13:04.510798377 +0000 UTC m=+1210.202467416"
Feb 16 15:13:04 crc kubenswrapper[4748]: I0216 15:13:04.729304 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:13:04 crc kubenswrapper[4748]: I0216 15:13:04.729360 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:13:07 crc kubenswrapper[4748]: I0216 15:13:07.603058 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:07 crc kubenswrapper[4748]: I0216 15:13:07.603413 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:07 crc kubenswrapper[4748]: I0216 15:13:07.606238 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:08 crc kubenswrapper[4748]: I0216 15:13:08.538871 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Feb 16 15:13:09 crc kubenswrapper[4748]: I0216 15:13:09.156830 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="ecd4cdcd-6dc0-4bba-980e-019d6eae5251" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Feb 16 15:13:10 crc kubenswrapper[4748]: I0216 15:13:10.917608 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 16 15:13:10 crc kubenswrapper[4748]: I0216 15:13:10.918745 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="prometheus" containerID="cri-o://a8d90bf5afb71766722cee26af3b301484c8aa5a957ce505db934400dd89baf9" gracePeriod=600
Feb 16 15:13:10 crc kubenswrapper[4748]: I0216 15:13:10.918901 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="thanos-sidecar" containerID="cri-o://4851e525e8023ddb80823c43cb1105b73cad6f36eed0fd6e2f8d806d375de953" gracePeriod=600
Feb 16 15:13:10 crc kubenswrapper[4748]: I0216 15:13:10.918915 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="config-reloader" containerID="cri-o://8f18b3963866d9bfd967865fcf5a6626796ceec51ae86e9d31784f1c02f86c76" gracePeriod=600
Feb 16 15:13:10 crc kubenswrapper[4748]: I0216 15:13:10.957178 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.253076 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.599564 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-22bvq"]
Feb 16 15:13:11 crc kubenswrapper[4748]: E0216 15:13:11.600734 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac93cc07-b68d-45f4-acca-c786541b4070" containerName="ovn-config"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.600751 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac93cc07-b68d-45f4-acca-c786541b4070" containerName="ovn-config"
Feb 16 15:13:11 crc kubenswrapper[4748]: E0216 15:13:11.600766 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef384d40-8b76-4b89-ada6-33dc0ff2e6fe" containerName="mariadb-account-create-update"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.600772 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef384d40-8b76-4b89-ada6-33dc0ff2e6fe" containerName="mariadb-account-create-update"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.600958 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac93cc07-b68d-45f4-acca-c786541b4070" containerName="ovn-config"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.600982 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef384d40-8b76-4b89-ada6-33dc0ff2e6fe" containerName="mariadb-account-create-update"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.601635 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.606266 4748 generic.go:334] "Generic (PLEG): container finished" podID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerID="4851e525e8023ddb80823c43cb1105b73cad6f36eed0fd6e2f8d806d375de953" exitCode=0
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.606298 4748 generic.go:334] "Generic (PLEG): container finished" podID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerID="8f18b3963866d9bfd967865fcf5a6626796ceec51ae86e9d31784f1c02f86c76" exitCode=0
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.606307 4748 generic.go:334] "Generic (PLEG): container finished" podID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerID="a8d90bf5afb71766722cee26af3b301484c8aa5a957ce505db934400dd89baf9" exitCode=0
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.606327 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerDied","Data":"4851e525e8023ddb80823c43cb1105b73cad6f36eed0fd6e2f8d806d375de953"}
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.606353 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerDied","Data":"8f18b3963866d9bfd967865fcf5a6626796ceec51ae86e9d31784f1c02f86c76"}
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.606369 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerDied","Data":"a8d90bf5afb71766722cee26af3b301484c8aa5a957ce505db934400dd89baf9"}
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.639648 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-22bvq"]
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.688783 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6p9g\" (UniqueName: \"kubernetes.io/projected/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-kube-api-access-x6p9g\") pod \"cloudkitty-db-create-22bvq\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.688864 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-operator-scripts\") pod \"cloudkitty-db-create-22bvq\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.790435 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6p9g\" (UniqueName: \"kubernetes.io/projected/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-kube-api-access-x6p9g\") pod \"cloudkitty-db-create-22bvq\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.790505 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-operator-scripts\") pod \"cloudkitty-db-create-22bvq\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.791225 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-operator-scripts\") pod \"cloudkitty-db-create-22bvq\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.811131 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xjt7m"]
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.827174 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.839133 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6p9g\" (UniqueName: \"kubernetes.io/projected/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-kube-api-access-x6p9g\") pod \"cloudkitty-db-create-22bvq\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.892443 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86qq\" (UniqueName: \"kubernetes.io/projected/16626919-d38e-4cb5-8661-1b8e78e3967f-kube-api-access-d86qq\") pod \"cinder-db-create-xjt7m\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.892561 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16626919-d38e-4cb5-8661-1b8e78e3967f-operator-scripts\") pod \"cinder-db-create-xjt7m\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.919482 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xjt7m"]
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.932216 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-22bvq"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.998726 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86qq\" (UniqueName: \"kubernetes.io/projected/16626919-d38e-4cb5-8661-1b8e78e3967f-kube-api-access-d86qq\") pod \"cinder-db-create-xjt7m\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:11 crc kubenswrapper[4748]: I0216 15:13:11.999355 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16626919-d38e-4cb5-8661-1b8e78e3967f-operator-scripts\") pod \"cinder-db-create-xjt7m\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.000667 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16626919-d38e-4cb5-8661-1b8e78e3967f-operator-scripts\") pod \"cinder-db-create-xjt7m\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.009988 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-ebaf-account-create-update-vxgmw"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.011512 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.015578 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.025946 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ebaf-account-create-update-vxgmw"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.048782 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86qq\" (UniqueName: \"kubernetes.io/projected/16626919-d38e-4cb5-8661-1b8e78e3967f-kube-api-access-d86qq\") pod \"cinder-db-create-xjt7m\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.110149 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkpkw\" (UniqueName: \"kubernetes.io/projected/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-kube-api-access-qkpkw\") pod \"cinder-ebaf-account-create-update-vxgmw\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.110276 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-operator-scripts\") pod \"cinder-ebaf-account-create-update-vxgmw\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.137832 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-t2thf"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.139041 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.166075 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-1694-account-create-update-vsgkb"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.167220 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1694-account-create-update-vsgkb"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.177156 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.198432 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t2thf"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.212322 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-operator-scripts\") pod \"cinder-ebaf-account-create-update-vxgmw\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.212371 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-operator-scripts\") pod \"neutron-1694-account-create-update-vsgkb\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " pod="openstack/neutron-1694-account-create-update-vsgkb"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.212398 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbsdh\" (UniqueName: \"kubernetes.io/projected/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-kube-api-access-wbsdh\") pod \"neutron-db-create-t2thf\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.212420 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxtk9\" (UniqueName: \"kubernetes.io/projected/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-kube-api-access-jxtk9\") pod \"neutron-1694-account-create-update-vsgkb\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " pod="openstack/neutron-1694-account-create-update-vsgkb"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.212479 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkpkw\" (UniqueName: \"kubernetes.io/projected/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-kube-api-access-qkpkw\") pod \"cinder-ebaf-account-create-update-vxgmw\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.212498 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-operator-scripts\") pod \"neutron-db-create-t2thf\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.213199 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-operator-scripts\") pod \"cinder-ebaf-account-create-update-vxgmw\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.218856 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1694-account-create-update-vsgkb"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.223797 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xjt7m"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.272336 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkpkw\" (UniqueName: \"kubernetes.io/projected/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-kube-api-access-qkpkw\") pod \"cinder-ebaf-account-create-update-vxgmw\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.280788 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-34d8-account-create-update-gcgxs"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.282340 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-34d8-account-create-update-gcgxs"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.288042 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.313896 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-operator-scripts\") pod \"neutron-1694-account-create-update-vsgkb\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " pod="openstack/neutron-1694-account-create-update-vsgkb"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.314259 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbsdh\" (UniqueName: \"kubernetes.io/projected/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-kube-api-access-wbsdh\") pod \"neutron-db-create-t2thf\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.314406 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxtk9\" (UniqueName: \"kubernetes.io/projected/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-kube-api-access-jxtk9\") pod \"neutron-1694-account-create-update-vsgkb\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " pod="openstack/neutron-1694-account-create-update-vsgkb"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.314549 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-operator-scripts\") pod \"neutron-db-create-t2thf\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.315241 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-operator-scripts\") pod \"neutron-db-create-t2thf\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.315855 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-operator-scripts\") pod \"neutron-1694-account-create-update-vsgkb\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " pod="openstack/neutron-1694-account-create-update-vsgkb"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.316514 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-34d8-account-create-update-gcgxs"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.331434 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-59h2b"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.332627 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-59h2b"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.333560 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ebaf-account-create-update-vxgmw"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.352894 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-59h2b"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.356303 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxtk9\" (UniqueName: \"kubernetes.io/projected/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-kube-api-access-jxtk9\") pod \"neutron-1694-account-create-update-vsgkb\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " pod="openstack/neutron-1694-account-create-update-vsgkb"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.364380 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbsdh\" (UniqueName: \"kubernetes.io/projected/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-kube-api-access-wbsdh\") pod \"neutron-db-create-t2thf\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.415989 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4s5\" (UniqueName: \"kubernetes.io/projected/dc357533-40d6-4300-982f-dcfc7f6219db-kube-api-access-kf4s5\") pod \"barbican-34d8-account-create-update-gcgxs\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " pod="openstack/barbican-34d8-account-create-update-gcgxs"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.416048 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-operator-scripts\") pod \"barbican-db-create-59h2b\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " pod="openstack/barbican-db-create-59h2b"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.416074 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc357533-40d6-4300-982f-dcfc7f6219db-operator-scripts\") pod \"barbican-34d8-account-create-update-gcgxs\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " pod="openstack/barbican-34d8-account-create-update-gcgxs"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.416108 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-524zz\" (UniqueName: \"kubernetes.io/projected/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-kube-api-access-524zz\") pod \"barbican-db-create-59h2b\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " pod="openstack/barbican-db-create-59h2b"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.441964 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-6b27-account-create-update-nvlxz"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.443746 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.449288 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.471144 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-6b27-account-create-update-nvlxz"]
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.475342 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2thf"
Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.500012 4748 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-1694-account-create-update-vsgkb" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.521871 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4s5\" (UniqueName: \"kubernetes.io/projected/dc357533-40d6-4300-982f-dcfc7f6219db-kube-api-access-kf4s5\") pod \"barbican-34d8-account-create-update-gcgxs\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " pod="openstack/barbican-34d8-account-create-update-gcgxs" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.521924 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-operator-scripts\") pod \"cloudkitty-6b27-account-create-update-nvlxz\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.521951 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-operator-scripts\") pod \"barbican-db-create-59h2b\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " pod="openstack/barbican-db-create-59h2b" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.521979 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc357533-40d6-4300-982f-dcfc7f6219db-operator-scripts\") pod \"barbican-34d8-account-create-update-gcgxs\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " pod="openstack/barbican-34d8-account-create-update-gcgxs" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.522001 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpbmk\" (UniqueName: 
\"kubernetes.io/projected/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-kube-api-access-rpbmk\") pod \"cloudkitty-6b27-account-create-update-nvlxz\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.522039 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-524zz\" (UniqueName: \"kubernetes.io/projected/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-kube-api-access-524zz\") pod \"barbican-db-create-59h2b\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " pod="openstack/barbican-db-create-59h2b" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.523262 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-operator-scripts\") pod \"barbican-db-create-59h2b\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " pod="openstack/barbican-db-create-59h2b" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.523785 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc357533-40d6-4300-982f-dcfc7f6219db-operator-scripts\") pod \"barbican-34d8-account-create-update-gcgxs\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " pod="openstack/barbican-34d8-account-create-update-gcgxs" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.539584 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-dz5pf"] Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.541461 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.544303 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.544530 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.544584 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.544899 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cd9jb" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.546013 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-524zz\" (UniqueName: \"kubernetes.io/projected/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-kube-api-access-524zz\") pod \"barbican-db-create-59h2b\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " pod="openstack/barbican-db-create-59h2b" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.549046 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4s5\" (UniqueName: \"kubernetes.io/projected/dc357533-40d6-4300-982f-dcfc7f6219db-kube-api-access-kf4s5\") pod \"barbican-34d8-account-create-update-gcgxs\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " pod="openstack/barbican-34d8-account-create-update-gcgxs" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.565491 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dz5pf"] Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.602768 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="prometheus" probeResult="failure" output="Get \"http://10.217.0.112:9090/-/ready\": 
dial tcp 10.217.0.112:9090: connect: connection refused" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.623438 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpbmk\" (UniqueName: \"kubernetes.io/projected/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-kube-api-access-rpbmk\") pod \"cloudkitty-6b27-account-create-update-nvlxz\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.623498 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7zhc\" (UniqueName: \"kubernetes.io/projected/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-kube-api-access-l7zhc\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.623562 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-combined-ca-bundle\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.623634 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-config-data\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.623691 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-operator-scripts\") pod 
\"cloudkitty-6b27-account-create-update-nvlxz\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.624423 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-operator-scripts\") pod \"cloudkitty-6b27-account-create-update-nvlxz\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.640155 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-34d8-account-create-update-gcgxs" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.640337 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpbmk\" (UniqueName: \"kubernetes.io/projected/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-kube-api-access-rpbmk\") pod \"cloudkitty-6b27-account-create-update-nvlxz\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.725569 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-config-data\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.725948 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7zhc\" (UniqueName: \"kubernetes.io/projected/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-kube-api-access-l7zhc\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: 
I0216 15:13:12.726028 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-combined-ca-bundle\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.729740 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-config-data\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.731788 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-combined-ca-bundle\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.743697 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7zhc\" (UniqueName: \"kubernetes.io/projected/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-kube-api-access-l7zhc\") pod \"keystone-db-sync-dz5pf\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.752974 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-59h2b" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.775899 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:12 crc kubenswrapper[4748]: I0216 15:13:12.859646 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.675741 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.759665 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-0\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.760134 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-thanos-prometheus-http-client-file\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.760180 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config-out\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.760287 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkwcz\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-kube-api-access-lkwcz\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.761055 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-0" 
(OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.762161 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.762212 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-2\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.762230 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.762288 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-tls-assets\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.762331 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-web-config\") pod 
\"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.762388 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-1\") pod \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\" (UID: \"8ce4009b-ef44-4224-a7ab-0d514eccbabb\") " Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.765137 4748 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.766757 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.770964 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.794090 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config" (OuterVolumeSpecName: "config") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.807080 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config-out" (OuterVolumeSpecName: "config-out") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.807786 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-kube-api-access-lkwcz" (OuterVolumeSpecName: "kube-api-access-lkwcz") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "kube-api-access-lkwcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.829289 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.831597 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.837623 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-web-config" (OuterVolumeSpecName: "web-config") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874564 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkwcz\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-kube-api-access-lkwcz\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874602 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874617 4748 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874630 4748 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8ce4009b-ef44-4224-a7ab-0d514eccbabb-tls-assets\") on node \"crc\" 
DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874645 4748 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-web-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874657 4748 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/8ce4009b-ef44-4224-a7ab-0d514eccbabb-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874673 4748 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8ce4009b-ef44-4224-a7ab-0d514eccbabb-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.874683 4748 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8ce4009b-ef44-4224-a7ab-0d514eccbabb-config-out\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.893621 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "8ce4009b-ef44-4224-a7ab-0d514eccbabb" (UID: "8ce4009b-ef44-4224-a7ab-0d514eccbabb"). InnerVolumeSpecName "pvc-145c2255-33ef-4af7-bf65-22c51b538f3f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:13:13 crc kubenswrapper[4748]: I0216 15:13:13.976387 4748 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") on node \"crc\" " Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.092074 4748 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.092803 4748 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-145c2255-33ef-4af7-bf65-22c51b538f3f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f") on node "crc" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.180198 4748 reconciler_common.go:293] "Volume detached for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.305293 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-1694-account-create-update-vsgkb"] Feb 16 15:13:14 crc kubenswrapper[4748]: W0216 15:13:14.320528 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode25a5cff_5c50_4edf_a118_08d53e0f1cdf.slice/crio-ab3a18520f88857c273637e68c2d53980ba946db284526ec0805273ec97b4166 WatchSource:0}: Error finding container ab3a18520f88857c273637e68c2d53980ba946db284526ec0805273ec97b4166: Status 404 returned error can't find the container with id ab3a18520f88857c273637e68c2d53980ba946db284526ec0805273ec97b4166 Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.672963 4748 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/neutron-1694-account-create-update-vsgkb" event={"ID":"e25a5cff-5c50-4edf-a118-08d53e0f1cdf","Type":"ContainerStarted","Data":"ab3a18520f88857c273637e68c2d53980ba946db284526ec0805273ec97b4166"} Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.684322 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xjt7m"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.685969 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zpx" event={"ID":"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe","Type":"ContainerStarted","Data":"29119b67c71cde604d33987ff5de6501dc5d0f1b8613233bcb4868750fbc7699"} Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.703081 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-t2thf"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.705934 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"8ce4009b-ef44-4224-a7ab-0d514eccbabb","Type":"ContainerDied","Data":"49e08f28ba315f5d6092adbc8dabda4c13901de2cff76c0f3980c7a8c87d8842"} Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.705981 4748 scope.go:117] "RemoveContainer" containerID="4851e525e8023ddb80823c43cb1105b73cad6f36eed0fd6e2f8d806d375de953" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.706130 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.729788 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-b2zpx" podStartSLOduration=5.711544339 podStartE2EDuration="25.729770898s" podCreationTimestamp="2026-02-16 15:12:49 +0000 UTC" firstStartedPulling="2026-02-16 15:12:53.564680623 +0000 UTC m=+1199.256349662" lastFinishedPulling="2026-02-16 15:13:13.582907182 +0000 UTC m=+1219.274576221" observedRunningTime="2026-02-16 15:13:14.727184214 +0000 UTC m=+1220.418853253" watchObservedRunningTime="2026-02-16 15:13:14.729770898 +0000 UTC m=+1220.421439937" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.774391 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-34d8-account-create-update-gcgxs"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.783301 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.802338 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-dz5pf"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.815940 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-22bvq"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.830961 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.846562 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-ebaf-account-create-update-vxgmw"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.865807 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-6b27-account-create-update-nvlxz"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.884794 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-create-59h2b"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.901877 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:13:14 crc kubenswrapper[4748]: E0216 15:13:14.902291 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="init-config-reloader" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.902313 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="init-config-reloader" Feb 16 15:13:14 crc kubenswrapper[4748]: E0216 15:13:14.902327 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="prometheus" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.902334 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="prometheus" Feb 16 15:13:14 crc kubenswrapper[4748]: E0216 15:13:14.902349 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="config-reloader" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.902355 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="config-reloader" Feb 16 15:13:14 crc kubenswrapper[4748]: E0216 15:13:14.902367 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="thanos-sidecar" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.902373 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="thanos-sidecar" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.902573 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="prometheus" Feb 16 
15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.902595 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="config-reloader" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.902607 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" containerName="thanos-sidecar" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.904379 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.910431 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.910708 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.910845 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.911080 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-r4jkk" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.911086 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.911253 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.918775 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.919042 4748 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.919229 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 16 15:13:14 crc kubenswrapper[4748]: I0216 15:13:14.921853 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004312 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3bc07b5c-b3d3-4ba1-b580-30e09261edab-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004657 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004686 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004768 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004792 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004812 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004840 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqqx4\" (UniqueName: \"kubernetes.io/projected/3bc07b5c-b3d3-4ba1-b580-30e09261edab-kube-api-access-pqqx4\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004863 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 
16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004881 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-config\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004901 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004920 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.004972 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3bc07b5c-b3d3-4ba1-b580-30e09261edab-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.005283 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: 
\"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.057742 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ce4009b-ef44-4224-a7ab-0d514eccbabb" path="/var/lib/kubelet/pods/8ce4009b-ef44-4224-a7ab-0d514eccbabb/volumes" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109283 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109343 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109367 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109403 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqqx4\" (UniqueName: \"kubernetes.io/projected/3bc07b5c-b3d3-4ba1-b580-30e09261edab-kube-api-access-pqqx4\") pod 
\"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109425 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109444 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-config\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109463 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109483 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109567 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3bc07b5c-b3d3-4ba1-b580-30e09261edab-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109635 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109681 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3bc07b5c-b3d3-4ba1-b580-30e09261edab-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109716 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.109767 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.115553 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.121434 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.122214 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3bc07b5c-b3d3-4ba1-b580-30e09261edab-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.152031 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.156154 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-config\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " 
pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.159258 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.166615 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.167336 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.167548 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bc07b5c-b3d3-4ba1-b580-30e09261edab-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.175458 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3bc07b5c-b3d3-4ba1-b580-30e09261edab-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: 
\"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.176387 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.176431 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/36679523bf6c4e41657f99198418f26740d0c20952506ea2d93db24b45c1d1d0/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.182497 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqqx4\" (UniqueName: \"kubernetes.io/projected/3bc07b5c-b3d3-4ba1-b580-30e09261edab-kube-api-access-pqqx4\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.209766 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3bc07b5c-b3d3-4ba1-b580-30e09261edab-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.383436 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-145c2255-33ef-4af7-bf65-22c51b538f3f\") pod \"prometheus-metric-storage-0\" (UID: 
\"3bc07b5c-b3d3-4ba1-b580-30e09261edab\") " pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.527667 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:15 crc kubenswrapper[4748]: W0216 15:13:15.548244 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb1df7ac4_4495_45a5_bf86_2dda09e6f9b1.slice/crio-c62431557999bcbe4d6eb792ca975e846e45eb214ba544450baa5fda32e0d1ea WatchSource:0}: Error finding container c62431557999bcbe4d6eb792ca975e846e45eb214ba544450baa5fda32e0d1ea: Status 404 returned error can't find the container with id c62431557999bcbe4d6eb792ca975e846e45eb214ba544450baa5fda32e0d1ea Feb 16 15:13:15 crc kubenswrapper[4748]: W0216 15:13:15.560264 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16626919_d38e_4cb5_8661_1b8e78e3967f.slice/crio-cf7a203cfaf07d857606b658f0c92b861109cc28b77b8f3b49bd8f01b3160aef WatchSource:0}: Error finding container cf7a203cfaf07d857606b658f0c92b861109cc28b77b8f3b49bd8f01b3160aef: Status 404 returned error can't find the container with id cf7a203cfaf07d857606b658f0c92b861109cc28b77b8f3b49bd8f01b3160aef Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.634206 4748 scope.go:117] "RemoveContainer" containerID="8f18b3963866d9bfd967865fcf5a6626796ceec51ae86e9d31784f1c02f86c76" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.730249 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz5pf" event={"ID":"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e","Type":"ContainerStarted","Data":"414c3a8dc5a4ce9f7ae1fe3b788de6278422f85ffa4fe57cec98cef671658632"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.731342 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xjt7m" 
event={"ID":"16626919-d38e-4cb5-8661-1b8e78e3967f","Type":"ContainerStarted","Data":"cf7a203cfaf07d857606b658f0c92b861109cc28b77b8f3b49bd8f01b3160aef"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.732310 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-22bvq" event={"ID":"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1","Type":"ContainerStarted","Data":"c62431557999bcbe4d6eb792ca975e846e45eb214ba544450baa5fda32e0d1ea"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.736233 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59h2b" event={"ID":"1d1263a5-3517-4ba9-ad62-f35c7a6220c6","Type":"ContainerStarted","Data":"54293ee14d17962823045961cc2b045ade70701d240baf870ac40817f84fe4b5"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.736995 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-34d8-account-create-update-gcgxs" event={"ID":"dc357533-40d6-4300-982f-dcfc7f6219db","Type":"ContainerStarted","Data":"b4d3e4c0731d01067d0544886d9bf19521495313c31f53fbef551175259c83d2"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.737924 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ebaf-account-create-update-vxgmw" event={"ID":"24a0bc3e-b389-4f0a-8a31-40869b3a3e77","Type":"ContainerStarted","Data":"d6acb02bdf5d64554cbb682f26f8691bf61b2f6bd244eee37bf276301934901e"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.742176 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2thf" event={"ID":"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a","Type":"ContainerStarted","Data":"27b5ac4d7d4c410a4c7c784d43e3f4155d571875f3d0a11f5ceed7d9422188de"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.744252 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" 
event={"ID":"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3","Type":"ContainerStarted","Data":"09ef48d5218dd772e2cf727c774a1ffe3d13ba8d855ec412e203220401140fa4"} Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.830564 4748 scope.go:117] "RemoveContainer" containerID="a8d90bf5afb71766722cee26af3b301484c8aa5a957ce505db934400dd89baf9" Feb 16 15:13:15 crc kubenswrapper[4748]: I0216 15:13:15.913899 4748 scope.go:117] "RemoveContainer" containerID="f38afe69bdd0099b17b3cd0853c0364ce1d1e25cd581aff8af825aca46249b6c" Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.166873 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 16 15:13:17 crc kubenswrapper[4748]: W0216 15:13:16.244165 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bc07b5c_b3d3_4ba1_b580_30e09261edab.slice/crio-35bc20229c4b20f4a7d927abf1a73572995d9d03cf6ea80dd5acae6f46ef697f WatchSource:0}: Error finding container 35bc20229c4b20f4a7d927abf1a73572995d9d03cf6ea80dd5acae6f46ef697f: Status 404 returned error can't find the container with id 35bc20229c4b20f4a7d927abf1a73572995d9d03cf6ea80dd5acae6f46ef697f Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.759042 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-34d8-account-create-update-gcgxs" event={"ID":"dc357533-40d6-4300-982f-dcfc7f6219db","Type":"ContainerStarted","Data":"42ebc32d0d54065ae1bdf3381a3da1ce50b5807aafce279f13f8e1002ff71a6c"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.763595 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1694-account-create-update-vsgkb" event={"ID":"e25a5cff-5c50-4edf-a118-08d53e0f1cdf","Type":"ContainerStarted","Data":"7bb8057d1ef92bbd9aa707b302d7b8367d26d18ebb6e07b6226bbb1e20734d54"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.773579 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"204424fbb352f9ff02b41210c9c8c9334bf491c2767585f4685fb5dc553bdefd"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.775599 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3bc07b5c-b3d3-4ba1-b580-30e09261edab","Type":"ContainerStarted","Data":"35bc20229c4b20f4a7d927abf1a73572995d9d03cf6ea80dd5acae6f46ef697f"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.798526 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2thf" event={"ID":"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a","Type":"ContainerStarted","Data":"f6f7fd1335c444fd75ce4057f57c3a8da08a32c71f88e1483fef747317a52e62"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.805084 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-34d8-account-create-update-gcgxs" podStartSLOduration=4.805066717 podStartE2EDuration="4.805066717s" podCreationTimestamp="2026-02-16 15:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.798125567 +0000 UTC m=+1222.489794596" watchObservedRunningTime="2026-02-16 15:13:16.805066717 +0000 UTC m=+1222.496735756" Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.806540 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" event={"ID":"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3","Type":"ContainerStarted","Data":"2554e6f0741c90e15959c7f131604f11c0db39ab14d97f8ac458ce8664879fe8"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.813468 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xjt7m" 
event={"ID":"16626919-d38e-4cb5-8661-1b8e78e3967f","Type":"ContainerStarted","Data":"d0ac0b5024c0b33e80125d8427c0f452755535012176a676afd235daa737e3b6"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.827496 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ebaf-account-create-update-vxgmw" event={"ID":"24a0bc3e-b389-4f0a-8a31-40869b3a3e77","Type":"ContainerStarted","Data":"01ee078aee0f2616f6cd432d2dd4de5149af707b01591ec5349725b28878f307"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.835708 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-22bvq" event={"ID":"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1","Type":"ContainerStarted","Data":"d1ea96a2ed17e509c3e6221f827e3118d7ce8000626475d191770f8118cd86ff"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.840173 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-1694-account-create-update-vsgkb" podStartSLOduration=4.840151027 podStartE2EDuration="4.840151027s" podCreationTimestamp="2026-02-16 15:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.819412479 +0000 UTC m=+1222.511081518" watchObservedRunningTime="2026-02-16 15:13:16.840151027 +0000 UTC m=+1222.531820066" Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.845279 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" podStartSLOduration=4.845265352 podStartE2EDuration="4.845265352s" podCreationTimestamp="2026-02-16 15:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.83822799 +0000 UTC m=+1222.529897029" watchObservedRunningTime="2026-02-16 15:13:16.845265352 +0000 UTC m=+1222.536934401" Feb 16 15:13:17 crc kubenswrapper[4748]: 
I0216 15:13:16.859850 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-xjt7m" podStartSLOduration=5.859835439 podStartE2EDuration="5.859835439s" podCreationTimestamp="2026-02-16 15:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.854261452 +0000 UTC m=+1222.545930501" watchObservedRunningTime="2026-02-16 15:13:16.859835439 +0000 UTC m=+1222.551504478" Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.862137 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59h2b" event={"ID":"1d1263a5-3517-4ba9-ad62-f35c7a6220c6","Type":"ContainerStarted","Data":"cb1e4f81e20336caa7c87ffce0542bfa4b87765ea78a307912a97bd190f695da"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.888977 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-t2thf" podStartSLOduration=4.888956542 podStartE2EDuration="4.888956542s" podCreationTimestamp="2026-02-16 15:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.867749263 +0000 UTC m=+1222.559418302" watchObservedRunningTime="2026-02-16 15:13:16.888956542 +0000 UTC m=+1222.580625581" Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.913876 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-ebaf-account-create-update-vxgmw" podStartSLOduration=5.913853112 podStartE2EDuration="5.913853112s" podCreationTimestamp="2026-02-16 15:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.897361998 +0000 UTC m=+1222.589031037" watchObservedRunningTime="2026-02-16 15:13:16.913853112 +0000 UTC m=+1222.605522151" Feb 16 15:13:17 crc 
kubenswrapper[4748]: I0216 15:13:16.916975 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-59h2b" podStartSLOduration=4.916960508 podStartE2EDuration="4.916960508s" podCreationTimestamp="2026-02-16 15:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.915088122 +0000 UTC m=+1222.606757161" watchObservedRunningTime="2026-02-16 15:13:16.916960508 +0000 UTC m=+1222.608629547" Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:16.941331 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-create-22bvq" podStartSLOduration=5.941309165 podStartE2EDuration="5.941309165s" podCreationTimestamp="2026-02-16 15:13:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:16.932419137 +0000 UTC m=+1222.624088176" watchObservedRunningTime="2026-02-16 15:13:16.941309165 +0000 UTC m=+1222.632978204" Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.871827 4748 generic.go:334] "Generic (PLEG): container finished" podID="24a0bc3e-b389-4f0a-8a31-40869b3a3e77" containerID="01ee078aee0f2616f6cd432d2dd4de5149af707b01591ec5349725b28878f307" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.871885 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ebaf-account-create-update-vxgmw" event={"ID":"24a0bc3e-b389-4f0a-8a31-40869b3a3e77","Type":"ContainerDied","Data":"01ee078aee0f2616f6cd432d2dd4de5149af707b01591ec5349725b28878f307"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.873562 4748 generic.go:334] "Generic (PLEG): container finished" podID="e25a5cff-5c50-4edf-a118-08d53e0f1cdf" containerID="7bb8057d1ef92bbd9aa707b302d7b8367d26d18ebb6e07b6226bbb1e20734d54" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 
15:13:17.873593 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1694-account-create-update-vsgkb" event={"ID":"e25a5cff-5c50-4edf-a118-08d53e0f1cdf","Type":"ContainerDied","Data":"7bb8057d1ef92bbd9aa707b302d7b8367d26d18ebb6e07b6226bbb1e20734d54"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.874620 4748 generic.go:334] "Generic (PLEG): container finished" podID="b1df7ac4-4495-45a5-bf86-2dda09e6f9b1" containerID="d1ea96a2ed17e509c3e6221f827e3118d7ce8000626475d191770f8118cd86ff" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.874652 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-22bvq" event={"ID":"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1","Type":"ContainerDied","Data":"d1ea96a2ed17e509c3e6221f827e3118d7ce8000626475d191770f8118cd86ff"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.875933 4748 generic.go:334] "Generic (PLEG): container finished" podID="dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a" containerID="f6f7fd1335c444fd75ce4057f57c3a8da08a32c71f88e1483fef747317a52e62" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.875973 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2thf" event={"ID":"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a","Type":"ContainerDied","Data":"f6f7fd1335c444fd75ce4057f57c3a8da08a32c71f88e1483fef747317a52e62"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.877046 4748 generic.go:334] "Generic (PLEG): container finished" podID="16626919-d38e-4cb5-8661-1b8e78e3967f" containerID="d0ac0b5024c0b33e80125d8427c0f452755535012176a676afd235daa737e3b6" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.877078 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xjt7m" event={"ID":"16626919-d38e-4cb5-8661-1b8e78e3967f","Type":"ContainerDied","Data":"d0ac0b5024c0b33e80125d8427c0f452755535012176a676afd235daa737e3b6"} Feb 16 15:13:17 crc 
kubenswrapper[4748]: I0216 15:13:17.878077 4748 generic.go:334] "Generic (PLEG): container finished" podID="dc357533-40d6-4300-982f-dcfc7f6219db" containerID="42ebc32d0d54065ae1bdf3381a3da1ce50b5807aafce279f13f8e1002ff71a6c" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.878110 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-34d8-account-create-update-gcgxs" event={"ID":"dc357533-40d6-4300-982f-dcfc7f6219db","Type":"ContainerDied","Data":"42ebc32d0d54065ae1bdf3381a3da1ce50b5807aafce279f13f8e1002ff71a6c"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.907901 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"dbee6c044a65360a66d1ce3f0bf25b82cdc176e2093ea9f3cbc96a4561034324"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.913222 4748 generic.go:334] "Generic (PLEG): container finished" podID="1d1263a5-3517-4ba9-ad62-f35c7a6220c6" containerID="cb1e4f81e20336caa7c87ffce0542bfa4b87765ea78a307912a97bd190f695da" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.913265 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59h2b" event={"ID":"1d1263a5-3517-4ba9-ad62-f35c7a6220c6","Type":"ContainerDied","Data":"cb1e4f81e20336caa7c87ffce0542bfa4b87765ea78a307912a97bd190f695da"} Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.914958 4748 generic.go:334] "Generic (PLEG): container finished" podID="8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3" containerID="2554e6f0741c90e15959c7f131604f11c0db39ab14d97f8ac458ce8664879fe8" exitCode=0 Feb 16 15:13:17 crc kubenswrapper[4748]: I0216 15:13:17.914997 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" 
event={"ID":"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3","Type":"ContainerDied","Data":"2554e6f0741c90e15959c7f131604f11c0db39ab14d97f8ac458ce8664879fe8"} Feb 16 15:13:18 crc kubenswrapper[4748]: I0216 15:13:18.929892 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"270b1582439adc06500dd5650b8c80915b8f7829474ef5808bfc53da2f3d21cf"} Feb 16 15:13:18 crc kubenswrapper[4748]: I0216 15:13:18.929933 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"beaa6c9eab18adebcaf7d2b3af894bd01bdc8192f137fa7bfc4a1ad647dd6103"} Feb 16 15:13:19 crc kubenswrapper[4748]: I0216 15:13:19.158046 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 16 15:13:19 crc kubenswrapper[4748]: I0216 15:13:19.941435 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3bc07b5c-b3d3-4ba1-b580-30e09261edab","Type":"ContainerStarted","Data":"2691f52b6fa73f819fcaae447b3112b04883c9a875b24b2eebc8741eaa4935a5"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.282485 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xjt7m" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.298086 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-ebaf-account-create-update-vxgmw" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.311121 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-22bvq" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.312357 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d86qq\" (UniqueName: \"kubernetes.io/projected/16626919-d38e-4cb5-8661-1b8e78e3967f-kube-api-access-d86qq\") pod \"16626919-d38e-4cb5-8661-1b8e78e3967f\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.312473 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16626919-d38e-4cb5-8661-1b8e78e3967f-operator-scripts\") pod \"16626919-d38e-4cb5-8661-1b8e78e3967f\" (UID: \"16626919-d38e-4cb5-8661-1b8e78e3967f\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.312958 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16626919-d38e-4cb5-8661-1b8e78e3967f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "16626919-d38e-4cb5-8661-1b8e78e3967f" (UID: "16626919-d38e-4cb5-8661-1b8e78e3967f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.313097 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/16626919-d38e-4cb5-8661-1b8e78e3967f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.340605 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.342354 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16626919-d38e-4cb5-8661-1b8e78e3967f-kube-api-access-d86qq" (OuterVolumeSpecName: "kube-api-access-d86qq") pod "16626919-d38e-4cb5-8661-1b8e78e3967f" (UID: "16626919-d38e-4cb5-8661-1b8e78e3967f"). InnerVolumeSpecName "kube-api-access-d86qq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.415734 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-operator-scripts\") pod \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.415782 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-operator-scripts\") pod \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.415811 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkpkw\" (UniqueName: \"kubernetes.io/projected/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-kube-api-access-qkpkw\") pod \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.415833 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpbmk\" (UniqueName: \"kubernetes.io/projected/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-kube-api-access-rpbmk\") pod \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\" (UID: \"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3\") " 
Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.416032 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6p9g\" (UniqueName: \"kubernetes.io/projected/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-kube-api-access-x6p9g\") pod \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\" (UID: \"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.416050 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-operator-scripts\") pod \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\" (UID: \"24a0bc3e-b389-4f0a-8a31-40869b3a3e77\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.416409 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d86qq\" (UniqueName: \"kubernetes.io/projected/16626919-d38e-4cb5-8661-1b8e78e3967f-kube-api-access-d86qq\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.416404 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3" (UID: "8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.416472 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1df7ac4-4495-45a5-bf86-2dda09e6f9b1" (UID: "b1df7ac4-4495-45a5-bf86-2dda09e6f9b1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.416932 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24a0bc3e-b389-4f0a-8a31-40869b3a3e77" (UID: "24a0bc3e-b389-4f0a-8a31-40869b3a3e77"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.420881 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-kube-api-access-qkpkw" (OuterVolumeSpecName: "kube-api-access-qkpkw") pod "24a0bc3e-b389-4f0a-8a31-40869b3a3e77" (UID: "24a0bc3e-b389-4f0a-8a31-40869b3a3e77"). InnerVolumeSpecName "kube-api-access-qkpkw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.421890 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-kube-api-access-x6p9g" (OuterVolumeSpecName: "kube-api-access-x6p9g") pod "b1df7ac4-4495-45a5-bf86-2dda09e6f9b1" (UID: "b1df7ac4-4495-45a5-bf86-2dda09e6f9b1"). InnerVolumeSpecName "kube-api-access-x6p9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.423155 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-kube-api-access-rpbmk" (OuterVolumeSpecName: "kube-api-access-rpbmk") pod "8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3" (UID: "8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3"). InnerVolumeSpecName "kube-api-access-rpbmk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.426093 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-34d8-account-create-update-gcgxs" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.438529 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-59h2b" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.466114 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-1694-account-create-update-vsgkb" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.498487 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-t2thf" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518052 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-operator-scripts\") pod \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518132 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-operator-scripts\") pod \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518233 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-524zz\" (UniqueName: \"kubernetes.io/projected/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-kube-api-access-524zz\") pod \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\" (UID: \"1d1263a5-3517-4ba9-ad62-f35c7a6220c6\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518313 4748 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc357533-40d6-4300-982f-dcfc7f6219db-operator-scripts\") pod \"dc357533-40d6-4300-982f-dcfc7f6219db\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518350 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kf4s5\" (UniqueName: \"kubernetes.io/projected/dc357533-40d6-4300-982f-dcfc7f6219db-kube-api-access-kf4s5\") pod \"dc357533-40d6-4300-982f-dcfc7f6219db\" (UID: \"dc357533-40d6-4300-982f-dcfc7f6219db\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518383 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxtk9\" (UniqueName: \"kubernetes.io/projected/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-kube-api-access-jxtk9\") pod \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\" (UID: \"e25a5cff-5c50-4edf-a118-08d53e0f1cdf\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518550 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e25a5cff-5c50-4edf-a118-08d53e0f1cdf" (UID: "e25a5cff-5c50-4edf-a118-08d53e0f1cdf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518586 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d1263a5-3517-4ba9-ad62-f35c7a6220c6" (UID: "1d1263a5-3517-4ba9-ad62-f35c7a6220c6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518970 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.518997 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.519010 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkpkw\" (UniqueName: \"kubernetes.io/projected/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-kube-api-access-qkpkw\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.519024 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpbmk\" (UniqueName: \"kubernetes.io/projected/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3-kube-api-access-rpbmk\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.519039 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.519050 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.519062 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6p9g\" (UniqueName: \"kubernetes.io/projected/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1-kube-api-access-x6p9g\") on node \"crc\" DevicePath \"\"" Feb 16 
15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.519075 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24a0bc3e-b389-4f0a-8a31-40869b3a3e77-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.520104 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc357533-40d6-4300-982f-dcfc7f6219db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc357533-40d6-4300-982f-dcfc7f6219db" (UID: "dc357533-40d6-4300-982f-dcfc7f6219db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.523171 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-kube-api-access-524zz" (OuterVolumeSpecName: "kube-api-access-524zz") pod "1d1263a5-3517-4ba9-ad62-f35c7a6220c6" (UID: "1d1263a5-3517-4ba9-ad62-f35c7a6220c6"). InnerVolumeSpecName "kube-api-access-524zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.524536 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc357533-40d6-4300-982f-dcfc7f6219db-kube-api-access-kf4s5" (OuterVolumeSpecName: "kube-api-access-kf4s5") pod "dc357533-40d6-4300-982f-dcfc7f6219db" (UID: "dc357533-40d6-4300-982f-dcfc7f6219db"). InnerVolumeSpecName "kube-api-access-kf4s5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.527390 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-kube-api-access-jxtk9" (OuterVolumeSpecName: "kube-api-access-jxtk9") pod "e25a5cff-5c50-4edf-a118-08d53e0f1cdf" (UID: "e25a5cff-5c50-4edf-a118-08d53e0f1cdf"). InnerVolumeSpecName "kube-api-access-jxtk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.619941 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-operator-scripts\") pod \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.620085 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbsdh\" (UniqueName: \"kubernetes.io/projected/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-kube-api-access-wbsdh\") pod \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\" (UID: \"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a\") " Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.620491 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxtk9\" (UniqueName: \"kubernetes.io/projected/e25a5cff-5c50-4edf-a118-08d53e0f1cdf-kube-api-access-jxtk9\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.620560 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-524zz\" (UniqueName: \"kubernetes.io/projected/1d1263a5-3517-4ba9-ad62-f35c7a6220c6-kube-api-access-524zz\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.620619 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/dc357533-40d6-4300-982f-dcfc7f6219db-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.620673 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kf4s5\" (UniqueName: \"kubernetes.io/projected/dc357533-40d6-4300-982f-dcfc7f6219db-kube-api-access-kf4s5\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.621164 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a" (UID: "dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.643924 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-kube-api-access-wbsdh" (OuterVolumeSpecName: "kube-api-access-wbsdh") pod "dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a" (UID: "dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a"). InnerVolumeSpecName "kube-api-access-wbsdh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.723055 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.723100 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbsdh\" (UniqueName: \"kubernetes.io/projected/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a-kube-api-access-wbsdh\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.967633 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz5pf" event={"ID":"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e","Type":"ContainerStarted","Data":"acc14d26be1aec97d56e5f4958a6785b381e7e517e4c636d671d09ec2848da4a"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.969878 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.969981 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-6b27-account-create-update-nvlxz" event={"ID":"8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3","Type":"ContainerDied","Data":"09ef48d5218dd772e2cf727c774a1ffe3d13ba8d855ec412e203220401140fa4"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.970019 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ef48d5218dd772e2cf727c774a1ffe3d13ba8d855ec412e203220401140fa4" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.971472 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xjt7m" event={"ID":"16626919-d38e-4cb5-8661-1b8e78e3967f","Type":"ContainerDied","Data":"cf7a203cfaf07d857606b658f0c92b861109cc28b77b8f3b49bd8f01b3160aef"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.971530 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf7a203cfaf07d857606b658f0c92b861109cc28b77b8f3b49bd8f01b3160aef" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.971631 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xjt7m" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.972830 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-1694-account-create-update-vsgkb" event={"ID":"e25a5cff-5c50-4edf-a118-08d53e0f1cdf","Type":"ContainerDied","Data":"ab3a18520f88857c273637e68c2d53980ba946db284526ec0805273ec97b4166"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.972855 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3a18520f88857c273637e68c2d53980ba946db284526ec0805273ec97b4166" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.972898 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-1694-account-create-update-vsgkb" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.982670 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-22bvq" event={"ID":"b1df7ac4-4495-45a5-bf86-2dda09e6f9b1","Type":"ContainerDied","Data":"c62431557999bcbe4d6eb792ca975e846e45eb214ba544450baa5fda32e0d1ea"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.982740 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c62431557999bcbe4d6eb792ca975e846e45eb214ba544450baa5fda32e0d1ea" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.982800 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-22bvq" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.985114 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-t2thf" event={"ID":"dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a","Type":"ContainerDied","Data":"27b5ac4d7d4c410a4c7c784d43e3f4155d571875f3d0a11f5ceed7d9422188de"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.985172 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27b5ac4d7d4c410a4c7c784d43e3f4155d571875f3d0a11f5ceed7d9422188de" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.985233 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-t2thf" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.995661 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-dz5pf" podStartSLOduration=4.579998685 podStartE2EDuration="10.995639152s" podCreationTimestamp="2026-02-16 15:13:12 +0000 UTC" firstStartedPulling="2026-02-16 15:13:15.621247067 +0000 UTC m=+1221.312916116" lastFinishedPulling="2026-02-16 15:13:22.036887544 +0000 UTC m=+1227.728556583" observedRunningTime="2026-02-16 15:13:22.986924068 +0000 UTC m=+1228.678593117" watchObservedRunningTime="2026-02-16 15:13:22.995639152 +0000 UTC m=+1228.687308191" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.996017 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-59h2b" Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.996864 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59h2b" event={"ID":"1d1263a5-3517-4ba9-ad62-f35c7a6220c6","Type":"ContainerDied","Data":"54293ee14d17962823045961cc2b045ade70701d240baf870ac40817f84fe4b5"} Feb 16 15:13:22 crc kubenswrapper[4748]: I0216 15:13:22.996931 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54293ee14d17962823045961cc2b045ade70701d240baf870ac40817f84fe4b5" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.003426 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-34d8-account-create-update-gcgxs" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.022337 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-ebaf-account-create-update-vxgmw" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.030264 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-34d8-account-create-update-gcgxs" event={"ID":"dc357533-40d6-4300-982f-dcfc7f6219db","Type":"ContainerDied","Data":"b4d3e4c0731d01067d0544886d9bf19521495313c31f53fbef551175259c83d2"} Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.030321 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4d3e4c0731d01067d0544886d9bf19521495313c31f53fbef551175259c83d2" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.030613 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-ebaf-account-create-update-vxgmw" event={"ID":"24a0bc3e-b389-4f0a-8a31-40869b3a3e77","Type":"ContainerDied","Data":"d6acb02bdf5d64554cbb682f26f8691bf61b2f6bd244eee37bf276301934901e"} Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.030629 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6acb02bdf5d64554cbb682f26f8691bf61b2f6bd244eee37bf276301934901e" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.031844 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"b0274a7d39a66fa9ce3c12702bfa3481bb47f21d1ef4103a1b49dd5d874d3a92"} Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.032295 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"5bb3a1afdb7a06e33bf5458b8d397304e7e29279c7cca8c4326e0fda4cb9d759"} Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.032311 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"3c460294-3cc7-4770-9a8a-0bd7c2b8fad2","Type":"ContainerStarted","Data":"0e40954939d0c14ece9e3747eccf320cd4bdd6aa9fb57073c9abfd4f09d07502"} Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.087948 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=28.127934396 podStartE2EDuration="47.087925002s" podCreationTimestamp="2026-02-16 15:12:36 +0000 UTC" firstStartedPulling="2026-02-16 15:12:56.713274456 +0000 UTC m=+1202.404943505" lastFinishedPulling="2026-02-16 15:13:15.673265072 +0000 UTC m=+1221.364934111" observedRunningTime="2026-02-16 15:13:23.068443264 +0000 UTC m=+1228.760112323" watchObservedRunningTime="2026-02-16 15:13:23.087925002 +0000 UTC m=+1228.779594061" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.328672 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nc9jq"] Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329109 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d1263a5-3517-4ba9-ad62-f35c7a6220c6" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329124 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d1263a5-3517-4ba9-ad62-f35c7a6220c6" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329144 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24a0bc3e-b389-4f0a-8a31-40869b3a3e77" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329154 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="24a0bc3e-b389-4f0a-8a31-40869b3a3e77" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329166 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16626919-d38e-4cb5-8661-1b8e78e3967f" containerName="mariadb-database-create" Feb 16 15:13:23 crc 
kubenswrapper[4748]: I0216 15:13:23.329174 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="16626919-d38e-4cb5-8661-1b8e78e3967f" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329188 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329196 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329214 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329222 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329235 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc357533-40d6-4300-982f-dcfc7f6219db" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329243 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc357533-40d6-4300-982f-dcfc7f6219db" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329274 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1df7ac4-4495-45a5-bf86-2dda09e6f9b1" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329284 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1df7ac4-4495-45a5-bf86-2dda09e6f9b1" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: E0216 15:13:23.329308 4748 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e25a5cff-5c50-4edf-a118-08d53e0f1cdf" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329315 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="e25a5cff-5c50-4edf-a118-08d53e0f1cdf" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329514 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="e25a5cff-5c50-4edf-a118-08d53e0f1cdf" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329539 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329551 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc357533-40d6-4300-982f-dcfc7f6219db" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329572 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="16626919-d38e-4cb5-8661-1b8e78e3967f" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329588 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d1263a5-3517-4ba9-ad62-f35c7a6220c6" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329604 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3" containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329616 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1df7ac4-4495-45a5-bf86-2dda09e6f9b1" containerName="mariadb-database-create" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.329630 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="24a0bc3e-b389-4f0a-8a31-40869b3a3e77" 
containerName="mariadb-account-create-update" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.330808 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.335049 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.355060 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nc9jq"] Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.435922 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.436013 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-config\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.436048 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhkg\" (UniqueName: \"kubernetes.io/projected/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-kube-api-access-szhkg\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.436066 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.436173 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.436220 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.537364 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-config\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.537415 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szhkg\" (UniqueName: \"kubernetes.io/projected/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-kube-api-access-szhkg\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.537434 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.537489 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.537515 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.537568 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.538199 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-config\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.538202 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-sb\") 
pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.538559 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-svc\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.538834 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.538838 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.555057 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szhkg\" (UniqueName: \"kubernetes.io/projected/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-kube-api-access-szhkg\") pod \"dnsmasq-dns-764c5664d7-nc9jq\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:23 crc kubenswrapper[4748]: I0216 15:13:23.652021 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:24 crc kubenswrapper[4748]: I0216 15:13:24.189742 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nc9jq"] Feb 16 15:13:24 crc kubenswrapper[4748]: W0216 15:13:24.190665 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6797935_76f7_42dd_9f6d_63f6f63ff9bf.slice/crio-1a92d3e50ef44793c6047b5f52801e07c6edb8bc2019968021caba90271525d5 WatchSource:0}: Error finding container 1a92d3e50ef44793c6047b5f52801e07c6edb8bc2019968021caba90271525d5: Status 404 returned error can't find the container with id 1a92d3e50ef44793c6047b5f52801e07c6edb8bc2019968021caba90271525d5 Feb 16 15:13:24 crc kubenswrapper[4748]: E0216 15:13:24.532751 4748 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6797935_76f7_42dd_9f6d_63f6f63ff9bf.slice/crio-conmon-70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:13:25 crc kubenswrapper[4748]: I0216 15:13:25.053338 4748 generic.go:334] "Generic (PLEG): container finished" podID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerID="70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c" exitCode=0 Feb 16 15:13:25 crc kubenswrapper[4748]: I0216 15:13:25.053411 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" event={"ID":"d6797935-76f7-42dd-9f6d-63f6f63ff9bf","Type":"ContainerDied","Data":"70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c"} Feb 16 15:13:25 crc kubenswrapper[4748]: I0216 15:13:25.053649 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" 
event={"ID":"d6797935-76f7-42dd-9f6d-63f6f63ff9bf","Type":"ContainerStarted","Data":"1a92d3e50ef44793c6047b5f52801e07c6edb8bc2019968021caba90271525d5"} Feb 16 15:13:25 crc kubenswrapper[4748]: I0216 15:13:25.056960 4748 generic.go:334] "Generic (PLEG): container finished" podID="a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" containerID="29119b67c71cde604d33987ff5de6501dc5d0f1b8613233bcb4868750fbc7699" exitCode=0 Feb 16 15:13:25 crc kubenswrapper[4748]: I0216 15:13:25.057006 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zpx" event={"ID":"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe","Type":"ContainerDied","Data":"29119b67c71cde604d33987ff5de6501dc5d0f1b8613233bcb4868750fbc7699"} Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.068405 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" event={"ID":"d6797935-76f7-42dd-9f6d-63f6f63ff9bf","Type":"ContainerStarted","Data":"14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6"} Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.068914 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.070287 4748 generic.go:334] "Generic (PLEG): container finished" podID="3bc07b5c-b3d3-4ba1-b580-30e09261edab" containerID="2691f52b6fa73f819fcaae447b3112b04883c9a875b24b2eebc8741eaa4935a5" exitCode=0 Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.070361 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3bc07b5c-b3d3-4ba1-b580-30e09261edab","Type":"ContainerDied","Data":"2691f52b6fa73f819fcaae447b3112b04883c9a875b24b2eebc8741eaa4935a5"} Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.071841 4748 generic.go:334] "Generic (PLEG): container finished" podID="87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" 
containerID="acc14d26be1aec97d56e5f4958a6785b381e7e517e4c636d671d09ec2848da4a" exitCode=0 Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.071966 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz5pf" event={"ID":"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e","Type":"ContainerDied","Data":"acc14d26be1aec97d56e5f4958a6785b381e7e517e4c636d671d09ec2848da4a"} Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.132988 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" podStartSLOduration=3.132969019 podStartE2EDuration="3.132969019s" podCreationTimestamp="2026-02-16 15:13:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:26.100188376 +0000 UTC m=+1231.791857415" watchObservedRunningTime="2026-02-16 15:13:26.132969019 +0000 UTC m=+1231.824638058" Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.819001 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-b2zpx" Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.904337 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-db-sync-config-data\") pod \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.904427 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-combined-ca-bundle\") pod \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.904578 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-config-data\") pod \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.904661 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4w2r\" (UniqueName: \"kubernetes.io/projected/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-kube-api-access-x4w2r\") pod \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\" (UID: \"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe\") " Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.908724 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" (UID: "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.908743 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-kube-api-access-x4w2r" (OuterVolumeSpecName: "kube-api-access-x4w2r") pod "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" (UID: "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe"). InnerVolumeSpecName "kube-api-access-x4w2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.934125 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" (UID: "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:26 crc kubenswrapper[4748]: I0216 15:13:26.949893 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-config-data" (OuterVolumeSpecName: "config-data") pod "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" (UID: "a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.011342 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.011415 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.011427 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4w2r\" (UniqueName: \"kubernetes.io/projected/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-kube-api-access-x4w2r\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.011441 4748 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.090332 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-b2zpx" event={"ID":"a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe","Type":"ContainerDied","Data":"cb563fbd679c91529591fd04b9c99488e31cbb227f5cffa32fb45e826ec61182"} Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.090376 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb563fbd679c91529591fd04b9c99488e31cbb227f5cffa32fb45e826ec61182" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.090376 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-b2zpx" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.093530 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3bc07b5c-b3d3-4ba1-b580-30e09261edab","Type":"ContainerStarted","Data":"213f05742a6fd437700332f520e05f76f79776a827a85541d8fb9f8e3ef523f6"} Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.475167 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nc9jq"] Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.525649 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d2n8b"] Feb 16 15:13:27 crc kubenswrapper[4748]: E0216 15:13:27.526261 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" containerName="glance-db-sync" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.526399 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" containerName="glance-db-sync" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.526814 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" containerName="glance-db-sync" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.528273 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.555685 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d2n8b"] Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.622117 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.622164 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-config\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.622247 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.622283 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.622316 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-c94ln\" (UniqueName: \"kubernetes.io/projected/9d2cfbc5-6234-4d16-b3e8-db42c311a912-kube-api-access-c94ln\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.622345 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.628858 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.723909 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7zhc\" (UniqueName: \"kubernetes.io/projected/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-kube-api-access-l7zhc\") pod \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724050 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-config-data\") pod \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724105 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-combined-ca-bundle\") pod \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\" (UID: \"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e\") " Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724378 
4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94ln\" (UniqueName: \"kubernetes.io/projected/9d2cfbc5-6234-4d16-b3e8-db42c311a912-kube-api-access-c94ln\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724431 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724481 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724499 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-config\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724572 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.724607 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.725625 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.727060 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.727203 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-config\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.727204 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.728783 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.742953 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c94ln\" (UniqueName: \"kubernetes.io/projected/9d2cfbc5-6234-4d16-b3e8-db42c311a912-kube-api-access-c94ln\") pod \"dnsmasq-dns-74f6bcbc87-d2n8b\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.750023 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-kube-api-access-l7zhc" (OuterVolumeSpecName: "kube-api-access-l7zhc") pod "87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" (UID: "87ed93e3-0fa2-48ec-a2e8-2e371daaa93e"). InnerVolumeSpecName "kube-api-access-l7zhc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.754470 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" (UID: "87ed93e3-0fa2-48ec-a2e8-2e371daaa93e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.783817 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-config-data" (OuterVolumeSpecName: "config-data") pod "87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" (UID: "87ed93e3-0fa2-48ec-a2e8-2e371daaa93e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.826737 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7zhc\" (UniqueName: \"kubernetes.io/projected/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-kube-api-access-l7zhc\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.826772 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.826782 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:27 crc kubenswrapper[4748]: I0216 15:13:27.941597 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.107220 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-dz5pf" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.107207 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-dz5pf" event={"ID":"87ed93e3-0fa2-48ec-a2e8-2e371daaa93e","Type":"ContainerDied","Data":"414c3a8dc5a4ce9f7ae1fe3b788de6278422f85ffa4fe57cec98cef671658632"} Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.107285 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="414c3a8dc5a4ce9f7ae1fe3b788de6278422f85ffa4fe57cec98cef671658632" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.107313 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" podUID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerName="dnsmasq-dns" containerID="cri-o://14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6" gracePeriod=10 Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.375516 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d2n8b"] Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.415143 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7dqg8"] Feb 16 15:13:28 crc kubenswrapper[4748]: E0216 15:13:28.415548 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" containerName="keystone-db-sync" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.415565 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" containerName="keystone-db-sync" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.415764 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" containerName="keystone-db-sync" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.416732 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.450835 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-w4qpk"] Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.452083 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.459335 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.459671 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.459890 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.460053 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cd9jb" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.460261 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.503060 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7dqg8"] Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546063 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-credential-keys\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546121 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546151 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958ml\" (UniqueName: \"kubernetes.io/projected/2c4a81e3-35af-43e8-aef9-04c03c01bba1-kube-api-access-958ml\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546179 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546207 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-fernet-keys\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546246 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682kn\" (UniqueName: \"kubernetes.io/projected/2edd4f60-d486-4690-84a1-768479ff2749-kube-api-access-682kn\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546277 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546307 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-scripts\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546340 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546361 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546390 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-config\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.546413 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-config-data\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.561869 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d2n8b"] Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.575782 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w4qpk"] Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.648224 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.648506 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-config\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.648637 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-config-data\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.648778 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-credential-keys\") 
pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.648873 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.648973 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-958ml\" (UniqueName: \"kubernetes.io/projected/2c4a81e3-35af-43e8-aef9-04c03c01bba1-kube-api-access-958ml\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.649071 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.649185 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-fernet-keys\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.649331 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-682kn\" (UniqueName: \"kubernetes.io/projected/2edd4f60-d486-4690-84a1-768479ff2749-kube-api-access-682kn\") pod \"keystone-bootstrap-w4qpk\" (UID: 
\"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.649448 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.649544 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-scripts\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.649661 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.654769 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-svc\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.655263 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-config\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: 
I0216 15:13:28.663559 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.664478 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.665017 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.667914 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-scripts\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.668426 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.672315 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-credential-keys\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.676076 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-config-data\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.687215 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-fernet-keys\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.804468 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-682kn\" (UniqueName: \"kubernetes.io/projected/2edd4f60-d486-4690-84a1-768479ff2749-kube-api-access-682kn\") pod \"keystone-bootstrap-w4qpk\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.829785 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.919553 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.920035 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jj48w"] Feb 16 15:13:28 crc kubenswrapper[4748]: E0216 15:13:28.920650 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerName="dnsmasq-dns" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.920668 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerName="dnsmasq-dns" Feb 16 15:13:28 crc kubenswrapper[4748]: E0216 15:13:28.920681 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerName="init" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.920689 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerName="init" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.921012 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerName="dnsmasq-dns" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.941678 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jj48w" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.955087 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.955303 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 15:13:28 crc kubenswrapper[4748]: I0216 15:13:28.955474 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-47qwf" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:28.994699 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-8skr8"] Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.010917 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jj48w"] Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.002095 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-swift-storage-0\") pod \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.011861 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-nb\") pod \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.011933 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-config\") pod \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.011986 
4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szhkg\" (UniqueName: \"kubernetes.io/projected/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-kube-api-access-szhkg\") pod \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.012033 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-sb\") pod \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.012087 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-svc\") pod \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\" (UID: \"d6797935-76f7-42dd-9f6d-63f6f63ff9bf\") " Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.023911 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-8skr8" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.040259 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-zrtkd" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.040572 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.040693 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.040693 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.098001 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7tc2j"] Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.099299 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7tc2j" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.105243 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z5647" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.105691 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.120553 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/67fe68e5-f0bc-406d-8880-9c39649848de-certs\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.120610 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-combined-ca-bundle\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.120676 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-config\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.120757 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhpm6\" (UniqueName: \"kubernetes.io/projected/67fe68e5-f0bc-406d-8880-9c39649848de-kube-api-access-jhpm6\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8" Feb 16 15:13:29 
crc kubenswrapper[4748]: I0216 15:13:29.120794 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-scripts\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.120830 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-combined-ca-bundle\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.120849 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl478\" (UniqueName: \"kubernetes.io/projected/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-kube-api-access-xl478\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.120865 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-config-data\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.121827 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-8skr8"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.146984 4748 generic.go:334] "Generic (PLEG): container finished" podID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" containerID="14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6" exitCode=0
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.147089 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" event={"ID":"d6797935-76f7-42dd-9f6d-63f6f63ff9bf","Type":"ContainerDied","Data":"14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6"}
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.147118 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq" event={"ID":"d6797935-76f7-42dd-9f6d-63f6f63ff9bf","Type":"ContainerDied","Data":"1a92d3e50ef44793c6047b5f52801e07c6edb8bc2019968021caba90271525d5"}
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.147141 4748 scope.go:117] "RemoveContainer" containerID="14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.147343 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-nc9jq"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.151004 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" event={"ID":"9d2cfbc5-6234-4d16-b3e8-db42c311a912","Type":"ContainerStarted","Data":"188a88a662b0f753c7cf5fe729196f04211416abe2cf496e7373968aec014dc3"}
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.155738 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7tc2j"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.185901 4748 scope.go:117] "RemoveContainer" containerID="70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.223312 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-958ml\" (UniqueName: \"kubernetes.io/projected/2c4a81e3-35af-43e8-aef9-04c03c01bba1-kube-api-access-958ml\") pod \"dnsmasq-dns-847c4cc679-7dqg8\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " pod="openstack/dnsmasq-dns-847c4cc679-7dqg8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225063 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlmfg\" (UniqueName: \"kubernetes.io/projected/011c3199-3e59-4794-ab44-de1abe4675a0-kube-api-access-dlmfg\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225116 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/67fe68e5-f0bc-406d-8880-9c39649848de-certs\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225139 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-combined-ca-bundle\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225199 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-config\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225244 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-combined-ca-bundle\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225276 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhpm6\" (UniqueName: \"kubernetes.io/projected/67fe68e5-f0bc-406d-8880-9c39649848de-kube-api-access-jhpm6\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225310 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-scripts\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225335 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-combined-ca-bundle\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225360 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl478\" (UniqueName: \"kubernetes.io/projected/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-kube-api-access-xl478\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225380 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-db-sync-config-data\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.225403 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-config-data\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.244274 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-kube-api-access-szhkg" (OuterVolumeSpecName: "kube-api-access-szhkg") pod "d6797935-76f7-42dd-9f6d-63f6f63ff9bf" (UID: "d6797935-76f7-42dd-9f6d-63f6f63ff9bf"). InnerVolumeSpecName "kube-api-access-szhkg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.247646 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-5n8n8"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.258765 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.276261 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.276461 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v2zdh"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.276573 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.276566 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/67fe68e5-f0bc-406d-8880-9c39649848de-certs\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.280143 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-scripts\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.297202 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-combined-ca-bundle\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.299436 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-combined-ca-bundle\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.325062 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67fe68e5-f0bc-406d-8880-9c39649848de-config-data\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.325693 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-config\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.327045 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-db-sync-config-data\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.327198 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlmfg\" (UniqueName: \"kubernetes.io/projected/011c3199-3e59-4794-ab44-de1abe4675a0-kube-api-access-dlmfg\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.327355 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-combined-ca-bundle\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.327517 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szhkg\" (UniqueName: \"kubernetes.io/projected/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-kube-api-access-szhkg\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.333878 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5n8n8"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.353949 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl478\" (UniqueName: \"kubernetes.io/projected/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-kube-api-access-xl478\") pod \"neutron-db-sync-jj48w\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.356422 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhpm6\" (UniqueName: \"kubernetes.io/projected/67fe68e5-f0bc-406d-8880-9c39649848de-kube-api-access-jhpm6\") pod \"cloudkitty-db-sync-8skr8\" (UID: \"67fe68e5-f0bc-406d-8880-9c39649848de\") " pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.357923 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jj48w"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.358368 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-db-sync-config-data\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.358957 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7dqg8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.436285 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlmfg\" (UniqueName: \"kubernetes.io/projected/011c3199-3e59-4794-ab44-de1abe4675a0-kube-api-access-dlmfg\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.441964 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.444286 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.449695 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-combined-ca-bundle\") pod \"barbican-db-sync-7tc2j\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") " pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.457926 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-8skr8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.458132 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-db-sync-config-data\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.458181 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45dh\" (UniqueName: \"kubernetes.io/projected/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-kube-api-access-d45dh\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.458242 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-config-data\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.458264 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-etc-machine-id\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.458377 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-scripts\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.458450 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-combined-ca-bundle\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.471180 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.471899 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.499243 4748 scope.go:117] "RemoveContainer" containerID="14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6"
Feb 16 15:13:29 crc kubenswrapper[4748]: E0216 15:13:29.500290 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6\": container with ID starting with 14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6 not found: ID does not exist" containerID="14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.500322 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6"} err="failed to get container status \"14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6\": rpc error: code = NotFound desc = could not find container \"14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6\": container with ID starting with 14e6fe9e5abeb475359c4cad3bc241b84898ec5e4d68ef33999072584a4a34f6 not found: ID does not exist"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.500342 4748 scope.go:117] "RemoveContainer" containerID="70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c"
Feb 16 15:13:29 crc kubenswrapper[4748]: E0216 15:13:29.504278 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c\": container with ID starting with 70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c not found: ID does not exist" containerID="70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.504320 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c"} err="failed to get container status \"70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c\": rpc error: code = NotFound desc = could not find container \"70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c\": container with ID starting with 70d9f449b83503ffc2683aef03ae07f8cf0da3252f65ea5ec1ad959c4d1c453c not found: ID does not exist"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560511 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-scripts\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560569 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-run-httpd\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560594 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-combined-ca-bundle\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560665 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb42r\" (UniqueName: \"kubernetes.io/projected/d88fff67-d90f-4c7d-bf0d-711c87006f68-kube-api-access-vb42r\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560693 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-config-data\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560725 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560770 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-db-sync-config-data\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560830 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-scripts\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560847 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-log-httpd\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560872 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45dh\" (UniqueName: \"kubernetes.io/projected/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-kube-api-access-d45dh\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560901 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-etc-machine-id\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560918 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-config-data\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.560941 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.561774 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-etc-machine-id\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.563398 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-config" (OuterVolumeSpecName: "config") pod "d6797935-76f7-42dd-9f6d-63f6f63ff9bf" (UID: "d6797935-76f7-42dd-9f6d-63f6f63ff9bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.570037 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d6797935-76f7-42dd-9f6d-63f6f63ff9bf" (UID: "d6797935-76f7-42dd-9f6d-63f6f63ff9bf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.575008 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-combined-ca-bundle\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.587214 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-scripts\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.616207 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-db-sync-config-data\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.617070 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-config-data\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.621908 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.625445 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d6797935-76f7-42dd-9f6d-63f6f63ff9bf" (UID: "d6797935-76f7-42dd-9f6d-63f6f63ff9bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.634078 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45dh\" (UniqueName: \"kubernetes.io/projected/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-kube-api-access-d45dh\") pod \"cinder-db-sync-5n8n8\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " pod="openstack/cinder-db-sync-5n8n8"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663571 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-scripts\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663612 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-log-httpd\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663667 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663762 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-run-httpd\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663859 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb42r\" (UniqueName: \"kubernetes.io/projected/d88fff67-d90f-4c7d-bf0d-711c87006f68-kube-api-access-vb42r\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663896 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-config-data\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663917 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663987 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.663999 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.664011 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.665119 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-log-httpd\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.665216 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-run-httpd\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.673926 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.677675 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-config-data\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.678962 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d6797935-76f7-42dd-9f6d-63f6f63ff9bf" (UID: "d6797935-76f7-42dd-9f6d-63f6f63ff9bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.679155 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d6797935-76f7-42dd-9f6d-63f6f63ff9bf" (UID: "d6797935-76f7-42dd-9f6d-63f6f63ff9bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.679251 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-scripts\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.686618 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.687502 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb42r\" (UniqueName: \"kubernetes.io/projected/d88fff67-d90f-4c7d-bf0d-711c87006f68-kube-api-access-vb42r\") pod \"ceilometer-0\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.705276 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.725032 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.739188 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5jqr7"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.740923 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5jqr7"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.742378 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.743862 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nvmt4"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.744439 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.750887 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7dqg8"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.763896 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5jqr7"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.765422 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.765444 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d6797935-76f7-42dd-9f6d-63f6f63ff9bf-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.790787 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mll8s"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.794812 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.800926 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mll8s"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.859621 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.879430 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.884292 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-config-data\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.884404 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-combined-ca-bundle\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.884672 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-scripts\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7"
Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.884829 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName:
\"kubernetes.io/empty-dir/1b3dfe98-47ed-4b69-b461-f0f9185e4697-logs\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.884922 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqd8\" (UniqueName: \"kubernetes.io/projected/1b3dfe98-47ed-4b69-b461-f0f9185e4697-kube-api-access-hvqd8\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.895258 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.902111 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.902458 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dql7h" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.916949 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-5n8n8" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.971648 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987078 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3dfe98-47ed-4b69-b461-f0f9185e4697-logs\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987142 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqd8\" (UniqueName: \"kubernetes.io/projected/1b3dfe98-47ed-4b69-b461-f0f9185e4697-kube-api-access-hvqd8\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987190 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmbfx\" (UniqueName: \"kubernetes.io/projected/f13e2e6e-7e41-4da9-868f-94d41efe273e-kube-api-access-cmbfx\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987225 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987268 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987318 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987336 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987363 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-config-data\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987384 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-combined-ca-bundle\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987407 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-config\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.987450 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-scripts\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:29 crc kubenswrapper[4748]: I0216 15:13:29.990156 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3dfe98-47ed-4b69-b461-f0f9185e4697-logs\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.000857 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-combined-ca-bundle\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.001746 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-scripts\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.013470 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.015271 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.016660 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-config-data\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.019124 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.031204 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nc9jq"] Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.036230 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqd8\" (UniqueName: \"kubernetes.io/projected/1b3dfe98-47ed-4b69-b461-f0f9185e4697-kube-api-access-hvqd8\") pod \"placement-db-sync-5jqr7\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.043340 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-nc9jq"] Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.059945 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.067971 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5jqr7" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.092908 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-config-data\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.092966 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.092994 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093025 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-logs\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093111 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmbfx\" (UniqueName: \"kubernetes.io/projected/f13e2e6e-7e41-4da9-868f-94d41efe273e-kube-api-access-cmbfx\") pod 
\"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093168 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093201 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093239 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093303 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093324 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzgwf\" (UniqueName: \"kubernetes.io/projected/809f2a37-8f13-402b-ac98-8ea753a32978-kube-api-access-tzgwf\") pod 
\"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093343 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093369 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-scripts\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.093395 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-config\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.094342 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-config\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.098198 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.099884 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.101667 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.102489 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-w4qpk"] Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.104365 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.146677 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmbfx\" (UniqueName: \"kubernetes.io/projected/f13e2e6e-7e41-4da9-868f-94d41efe273e-kube-api-access-cmbfx\") pod \"dnsmasq-dns-785d8bcb8c-mll8s\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") " pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.187825 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"3bc07b5c-b3d3-4ba1-b580-30e09261edab","Type":"ContainerStarted","Data":"d0eb0160a03d98b979ae8aad5e05e942572de9cf5ad599e9c57c3f961726bd7c"} Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.188161 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3bc07b5c-b3d3-4ba1-b580-30e09261edab","Type":"ContainerStarted","Data":"95f91ebcb694187ed70bb8534d436fb1dbbdce395c07df105c492e6e85c6a986"} Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.194850 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-config-data\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.194904 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.194938 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.194972 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-logs\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") 
" pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195037 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195063 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195084 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195165 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195192 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzgwf\" (UniqueName: \"kubernetes.io/projected/809f2a37-8f13-402b-ac98-8ea753a32978-kube-api-access-tzgwf\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " 
pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195459 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195508 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-scripts\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195555 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195582 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzldk\" (UniqueName: \"kubernetes.io/projected/c3398088-3491-4364-9b22-9f2f69527826-kube-api-access-qzldk\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.195597 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-logs\") pod \"glance-default-external-api-0\" (UID: 
\"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.199846 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.200592 4748 generic.go:334] "Generic (PLEG): container finished" podID="9d2cfbc5-6234-4d16-b3e8-db42c311a912" containerID="a4c9b72ee65d39f7db301dbff64e9fd4e9206deaaf8b4e434fe5e9f45410b33e" exitCode=0 Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.200661 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" event={"ID":"9d2cfbc5-6234-4d16-b3e8-db42c311a912","Type":"ContainerDied","Data":"a4c9b72ee65d39f7db301dbff64e9fd4e9206deaaf8b4e434fe5e9f45410b33e"} Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.201413 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.201447 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d294a5b1f89c6c34c7d61f2a887a4dff0b1fa943cd0603a1d259c16f2f816998/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.201799 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.205841 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-config-data\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.207570 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-scripts\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.219579 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.232645 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzgwf\" (UniqueName: \"kubernetes.io/projected/809f2a37-8f13-402b-ac98-8ea753a32978-kube-api-access-tzgwf\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.237820 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=16.237800477 podStartE2EDuration="16.237800477s" podCreationTimestamp="2026-02-16 15:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:30.23098462 +0000 UTC m=+1235.922653659" watchObservedRunningTime="2026-02-16 15:13:30.237800477 +0000 UTC m=+1235.929469516" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.301624 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.303393 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.303428 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.303538 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.303557 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzldk\" (UniqueName: \"kubernetes.io/projected/c3398088-3491-4364-9b22-9f2f69527826-kube-api-access-qzldk\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.303593 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.303974 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.304025 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.309670 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-logs\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.314227 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.316092 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.330225 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.331116 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.331145 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f58f00d51b81be0c55943aba0909dac7acd0e6134cb135d989bed8b6a75cb071/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.341175 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.348571 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzldk\" (UniqueName: \"kubernetes.io/projected/c3398088-3491-4364-9b22-9f2f69527826-kube-api-access-qzldk\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.395237 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.420631 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.430208 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.531347 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.531643 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.583906 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:30 crc kubenswrapper[4748]: I0216 15:13:30.657936 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.014893 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6797935-76f7-42dd-9f6d-63f6f63ff9bf" path="/var/lib/kubelet/pods/d6797935-76f7-42dd-9f6d-63f6f63ff9bf/volumes" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.036668 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jj48w"] Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.110094 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-8skr8"] Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.132166 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7dqg8"] Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.265947 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" event={"ID":"2c4a81e3-35af-43e8-aef9-04c03c01bba1","Type":"ContainerStarted","Data":"b687ba563b76d624a9623cc04b65167383ef9e4a47145e63363dae7d510365d8"} Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.286299 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-8skr8" event={"ID":"67fe68e5-f0bc-406d-8880-9c39649848de","Type":"ContainerStarted","Data":"6bf5c49af744e9f185e4dd207814c73a3f3ba9696242414e6ca72cfdc7b03dc8"} Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.287384 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.296804 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jj48w" event={"ID":"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf","Type":"ContainerStarted","Data":"20cd604597330223a59ed5803e90419ecf6115f35c6c1b0aebaf33d3da036451"} Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.303003 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w4qpk" event={"ID":"2edd4f60-d486-4690-84a1-768479ff2749","Type":"ContainerStarted","Data":"d68ad46b87e23d3c1eff5c4239a24666e3ca74d9b7f5136f744160546adb6788"} Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.303045 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w4qpk" event={"ID":"2edd4f60-d486-4690-84a1-768479ff2749","Type":"ContainerStarted","Data":"b072d4b5b48d2cee616d8a5dc7f7a9cee9546e5ce384cd5c624b0f7ca50f8246"} Feb 16 15:13:31 crc kubenswrapper[4748]: E0216 15:13:31.308804 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:13:31 crc kubenswrapper[4748]: E0216 15:13:31.308860 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:13:31 crc kubenswrapper[4748]: E0216 15:13:31.309001 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:13:31 crc kubenswrapper[4748]: E0216 15:13:31.311118 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.311290 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.422539 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.438987 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-w4qpk" podStartSLOduration=3.438964893 podStartE2EDuration="3.438964893s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:31.427226335 +0000 UTC m=+1237.118895374" watchObservedRunningTime="2026-02-16 15:13:31.438964893 +0000 UTC m=+1237.130633932" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.467932 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-sb\") pod \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.468007 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-nb\") pod \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.468112 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-config\") pod 
\"9d2cfbc5-6234-4d16-b3e8-db42c311a912\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.468296 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-swift-storage-0\") pod \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.468353 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-svc\") pod \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.468432 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94ln\" (UniqueName: \"kubernetes.io/projected/9d2cfbc5-6234-4d16-b3e8-db42c311a912-kube-api-access-c94ln\") pod \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\" (UID: \"9d2cfbc5-6234-4d16-b3e8-db42c311a912\") " Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.561777 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7tc2j"] Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.587866 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-5n8n8"] Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.627274 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d2cfbc5-6234-4d16-b3e8-db42c311a912-kube-api-access-c94ln" (OuterVolumeSpecName: "kube-api-access-c94ln") pod "9d2cfbc5-6234-4d16-b3e8-db42c311a912" (UID: "9d2cfbc5-6234-4d16-b3e8-db42c311a912"). InnerVolumeSpecName "kube-api-access-c94ln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.649510 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d2cfbc5-6234-4d16-b3e8-db42c311a912" (UID: "9d2cfbc5-6234-4d16-b3e8-db42c311a912"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.654607 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5jqr7"] Feb 16 15:13:31 crc kubenswrapper[4748]: W0216 15:13:31.654862 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59c88a82_b5c7_43aa_b216_2fe7bcc6dd71.slice/crio-eafa8fc9f7393633f46a1fb4d1508c80f2df669459430b255a4ce1ad944ae1e5 WatchSource:0}: Error finding container eafa8fc9f7393633f46a1fb4d1508c80f2df669459430b255a4ce1ad944ae1e5: Status 404 returned error can't find the container with id eafa8fc9f7393633f46a1fb4d1508c80f2df669459430b255a4ce1ad944ae1e5 Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.672419 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.672454 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94ln\" (UniqueName: \"kubernetes.io/projected/9d2cfbc5-6234-4d16-b3e8-db42c311a912-kube-api-access-c94ln\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.840598 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod 
"9d2cfbc5-6234-4d16-b3e8-db42c311a912" (UID: "9d2cfbc5-6234-4d16-b3e8-db42c311a912"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.841374 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d2cfbc5-6234-4d16-b3e8-db42c311a912" (UID: "9d2cfbc5-6234-4d16-b3e8-db42c311a912"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.886783 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-config" (OuterVolumeSpecName: "config") pod "9d2cfbc5-6234-4d16-b3e8-db42c311a912" (UID: "9d2cfbc5-6234-4d16-b3e8-db42c311a912"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.906853 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9d2cfbc5-6234-4d16-b3e8-db42c311a912" (UID: "9d2cfbc5-6234-4d16-b3e8-db42c311a912"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.927229 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.927497 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.927507 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:31 crc kubenswrapper[4748]: I0216 15:13:31.927516 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d2cfbc5-6234-4d16-b3e8-db42c311a912-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.027648 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.110680 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mll8s"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.126651 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.160977 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.248114 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.354030 4748 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" event={"ID":"f13e2e6e-7e41-4da9-868f-94d41efe273e","Type":"ContainerStarted","Data":"7fdf2c7a39f825df6235b0fdd71e0ebc34346e4dd464f553c1a133e3940deccb"} Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.355476 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.392464 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.393754 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-d2n8b" event={"ID":"9d2cfbc5-6234-4d16-b3e8-db42c311a912","Type":"ContainerDied","Data":"188a88a662b0f753c7cf5fe729196f04211416abe2cf496e7373968aec014dc3"} Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.393838 4748 scope.go:117] "RemoveContainer" containerID="a4c9b72ee65d39f7db301dbff64e9fd4e9206deaaf8b4e434fe5e9f45410b33e" Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.431930 4748 generic.go:334] "Generic (PLEG): container finished" podID="2c4a81e3-35af-43e8-aef9-04c03c01bba1" containerID="d361213b3fc7911eb69bef853cdf9b5c89c18757e17b0bd7f6e04a0f9731d6e7" exitCode=0 Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.432015 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" event={"ID":"2c4a81e3-35af-43e8-aef9-04c03c01bba1","Type":"ContainerDied","Data":"d361213b3fc7911eb69bef853cdf9b5c89c18757e17b0bd7f6e04a0f9731d6e7"} Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.496874 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7tc2j" event={"ID":"011c3199-3e59-4794-ab44-de1abe4675a0","Type":"ContainerStarted","Data":"b8a709251d8921937ada091214ec685b1d4f1ecc5c0ef625514b5e9b9655a05e"} Feb 16 15:13:32 crc 
kubenswrapper[4748]: I0216 15:13:32.553568 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d2n8b"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.563354 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-d2n8b"] Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.581815 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5n8n8" event={"ID":"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71","Type":"ContainerStarted","Data":"eafa8fc9f7393633f46a1fb4d1508c80f2df669459430b255a4ce1ad944ae1e5"} Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.623269 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5jqr7" event={"ID":"1b3dfe98-47ed-4b69-b461-f0f9185e4697","Type":"ContainerStarted","Data":"c863e54f394bd19f82129dede35095c921bb6e6547a301e68212d735b08bcb3e"} Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.656192 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerStarted","Data":"f3fcb121671fb80441b3919ccfd06f01ae600a189b7bb37963c0c4b842bd442f"} Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.663006 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jj48w" event={"ID":"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf","Type":"ContainerStarted","Data":"07859cdc596123ee747df27a7e91f2c6e59a6afcd022065c5b2acb0d467172de"} Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.695105 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"809f2a37-8f13-402b-ac98-8ea753a32978","Type":"ContainerStarted","Data":"fb79c7ba2cc42be9a880cfc601c0eaa04f2d12c40cb83387581bdbadbed08117"} Feb 16 15:13:32 crc kubenswrapper[4748]: E0216 15:13:32.699328 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.705025 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jj48w" podStartSLOduration=4.704860205 podStartE2EDuration="4.704860205s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:32.685974122 +0000 UTC m=+1238.377643171" watchObservedRunningTime="2026-02-16 15:13:32.704860205 +0000 UTC m=+1238.396529244" Feb 16 15:13:32 crc kubenswrapper[4748]: I0216 15:13:32.978361 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.022462 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d2cfbc5-6234-4d16-b3e8-db42c311a912" path="/var/lib/kubelet/pods/9d2cfbc5-6234-4d16-b3e8-db42c311a912/volumes" Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.060328 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-config\") pod \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.060395 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-swift-storage-0\") pod \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " Feb 16 15:13:33 crc 
kubenswrapper[4748]: I0216 15:13:33.060423 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-nb\") pod \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.060478 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-sb\") pod \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.060497 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-svc\") pod \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.060531 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-958ml\" (UniqueName: \"kubernetes.io/projected/2c4a81e3-35af-43e8-aef9-04c03c01bba1-kube-api-access-958ml\") pod \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\" (UID: \"2c4a81e3-35af-43e8-aef9-04c03c01bba1\") " Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.070828 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c4a81e3-35af-43e8-aef9-04c03c01bba1-kube-api-access-958ml" (OuterVolumeSpecName: "kube-api-access-958ml") pod "2c4a81e3-35af-43e8-aef9-04c03c01bba1" (UID: "2c4a81e3-35af-43e8-aef9-04c03c01bba1"). InnerVolumeSpecName "kube-api-access-958ml". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.115172 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c4a81e3-35af-43e8-aef9-04c03c01bba1" (UID: "2c4a81e3-35af-43e8-aef9-04c03c01bba1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.121521 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-config" (OuterVolumeSpecName: "config") pod "2c4a81e3-35af-43e8-aef9-04c03c01bba1" (UID: "2c4a81e3-35af-43e8-aef9-04c03c01bba1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.123143 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c4a81e3-35af-43e8-aef9-04c03c01bba1" (UID: "2c4a81e3-35af-43e8-aef9-04c03c01bba1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.128891 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c4a81e3-35af-43e8-aef9-04c03c01bba1" (UID: "2c4a81e3-35af-43e8-aef9-04c03c01bba1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.166706 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.166756 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.166767 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.166776 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.166784 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-958ml\" (UniqueName: \"kubernetes.io/projected/2c4a81e3-35af-43e8-aef9-04c03c01bba1-kube-api-access-958ml\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.167388 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c4a81e3-35af-43e8-aef9-04c03c01bba1" (UID: "2c4a81e3-35af-43e8-aef9-04c03c01bba1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.268512 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c4a81e3-35af-43e8-aef9-04c03c01bba1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.729665 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-7dqg8" event={"ID":"2c4a81e3-35af-43e8-aef9-04c03c01bba1","Type":"ContainerDied","Data":"b687ba563b76d624a9623cc04b65167383ef9e4a47145e63363dae7d510365d8"}
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.729959 4748 scope.go:117] "RemoveContainer" containerID="d361213b3fc7911eb69bef853cdf9b5c89c18757e17b0bd7f6e04a0f9731d6e7"
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.730071 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-7dqg8"
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.762201 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"809f2a37-8f13-402b-ac98-8ea753a32978","Type":"ContainerStarted","Data":"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4"}
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.774853 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3398088-3491-4364-9b22-9f2f69527826","Type":"ContainerStarted","Data":"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"}
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.774896 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3398088-3491-4364-9b22-9f2f69527826","Type":"ContainerStarted","Data":"10d8997eea27a21dd733713f44c19208ad1de3bc9a246fdac6793d339ed2c4d7"}
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.802352 4748 generic.go:334] "Generic (PLEG): container finished" podID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerID="cfffec18996a9d562ea98cc29208cd14603b33febe02c823a876fd5940685ed7" exitCode=0
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.803798 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" event={"ID":"f13e2e6e-7e41-4da9-868f-94d41efe273e","Type":"ContainerDied","Data":"cfffec18996a9d562ea98cc29208cd14603b33febe02c823a876fd5940685ed7"}
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.921679 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7dqg8"]
Feb 16 15:13:33 crc kubenswrapper[4748]: I0216 15:13:33.935673 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-7dqg8"]
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.728945 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.729292 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.729337 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg"
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.730146 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4af7b39f84ae1089f7c8b9340185a28b394a9429bf33b6cefed9e396e13808b9"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.730204 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://4af7b39f84ae1089f7c8b9340185a28b394a9429bf33b6cefed9e396e13808b9" gracePeriod=600
Feb 16 15:13:34 crc kubenswrapper[4748]: E0216 15:13:34.817783 4748 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfafb0b41_fe7a_4d57_a714_4666580d6ae6.slice/crio-conmon-4af7b39f84ae1089f7c8b9340185a28b394a9429bf33b6cefed9e396e13808b9.scope\": RecentStats: unable to find data in memory cache]"
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.821004 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"809f2a37-8f13-402b-ac98-8ea753a32978","Type":"ContainerStarted","Data":"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b"}
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.821138 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-log" containerID="cri-o://9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4" gracePeriod=30
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.821234 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-httpd" containerID="cri-o://df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b" gracePeriod=30
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.840172 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-log" containerID="cri-o://50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084" gracePeriod=30
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.840503 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-httpd" containerID="cri-o://7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e" gracePeriod=30
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.850664 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.850648051 podStartE2EDuration="6.850648051s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:34.84570724 +0000 UTC m=+1240.537376279" watchObservedRunningTime="2026-02-16 15:13:34.850648051 +0000 UTC m=+1240.542317090"
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.874396 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" event={"ID":"f13e2e6e-7e41-4da9-868f-94d41efe273e","Type":"ContainerStarted","Data":"0c657f3cbcd4db7170a8d6bcc0f391a5d4d58edd72d4401850feded56998a6f0"}
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.875555 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s"
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.912298 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" podStartSLOduration=5.912275921 podStartE2EDuration="5.912275921s" podCreationTimestamp="2026-02-16 15:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:34.905836113 +0000 UTC m=+1240.597505152" watchObservedRunningTime="2026-02-16 15:13:34.912275921 +0000 UTC m=+1240.603944960"
Feb 16 15:13:34 crc kubenswrapper[4748]: I0216 15:13:34.925965 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.925942455 podStartE2EDuration="6.925942455s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:34.883079355 +0000 UTC m=+1240.574748394" watchObservedRunningTime="2026-02-16 15:13:34.925942455 +0000 UTC m=+1240.617611494"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.009950 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c4a81e3-35af-43e8-aef9-04c03c01bba1" path="/var/lib/kubelet/pods/2c4a81e3-35af-43e8-aef9-04c03c01bba1/volumes"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.607001 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729423 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-httpd-run\") pod \"809f2a37-8f13-402b-ac98-8ea753a32978\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729631 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"809f2a37-8f13-402b-ac98-8ea753a32978\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729657 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-combined-ca-bundle\") pod \"809f2a37-8f13-402b-ac98-8ea753a32978\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729689 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-logs\") pod \"809f2a37-8f13-402b-ac98-8ea753a32978\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729735 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzgwf\" (UniqueName: \"kubernetes.io/projected/809f2a37-8f13-402b-ac98-8ea753a32978-kube-api-access-tzgwf\") pod \"809f2a37-8f13-402b-ac98-8ea753a32978\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729824 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-config-data\") pod \"809f2a37-8f13-402b-ac98-8ea753a32978\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729826 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "809f2a37-8f13-402b-ac98-8ea753a32978" (UID: "809f2a37-8f13-402b-ac98-8ea753a32978"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.729946 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-scripts\") pod \"809f2a37-8f13-402b-ac98-8ea753a32978\" (UID: \"809f2a37-8f13-402b-ac98-8ea753a32978\") "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.730009 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-logs" (OuterVolumeSpecName: "logs") pod "809f2a37-8f13-402b-ac98-8ea753a32978" (UID: "809f2a37-8f13-402b-ac98-8ea753a32978"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.730389 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.730400 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/809f2a37-8f13-402b-ac98-8ea753a32978-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.740475 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-scripts" (OuterVolumeSpecName: "scripts") pod "809f2a37-8f13-402b-ac98-8ea753a32978" (UID: "809f2a37-8f13-402b-ac98-8ea753a32978"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.770453 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc" (OuterVolumeSpecName: "glance") pod "809f2a37-8f13-402b-ac98-8ea753a32978" (UID: "809f2a37-8f13-402b-ac98-8ea753a32978"). InnerVolumeSpecName "pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.799734 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/809f2a37-8f13-402b-ac98-8ea753a32978-kube-api-access-tzgwf" (OuterVolumeSpecName: "kube-api-access-tzgwf") pod "809f2a37-8f13-402b-ac98-8ea753a32978" (UID: "809f2a37-8f13-402b-ac98-8ea753a32978"). InnerVolumeSpecName "kube-api-access-tzgwf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.832355 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.832392 4748 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") on node \"crc\" "
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.832404 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzgwf\" (UniqueName: \"kubernetes.io/projected/809f2a37-8f13-402b-ac98-8ea753a32978-kube-api-access-tzgwf\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.860197 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-config-data" (OuterVolumeSpecName: "config-data") pod "809f2a37-8f13-402b-ac98-8ea753a32978" (UID: "809f2a37-8f13-402b-ac98-8ea753a32978"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.875843 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "809f2a37-8f13-402b-ac98-8ea753a32978" (UID: "809f2a37-8f13-402b-ac98-8ea753a32978"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.898379 4748 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.900597 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.901695 4748 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc") on node "crc"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.914934 4748 generic.go:334] "Generic (PLEG): container finished" podID="c3398088-3491-4364-9b22-9f2f69527826" containerID="7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e" exitCode=143
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.914993 4748 generic.go:334] "Generic (PLEG): container finished" podID="c3398088-3491-4364-9b22-9f2f69527826" containerID="50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084" exitCode=143
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.915036 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3398088-3491-4364-9b22-9f2f69527826","Type":"ContainerDied","Data":"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.915085 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3398088-3491-4364-9b22-9f2f69527826","Type":"ContainerDied","Data":"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.915103 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c3398088-3491-4364-9b22-9f2f69527826","Type":"ContainerDied","Data":"10d8997eea27a21dd733713f44c19208ad1de3bc9a246fdac6793d339ed2c4d7"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.915137 4748 scope.go:117] "RemoveContainer" containerID="7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.915299 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.934277 4748 reconciler_common.go:293] "Volume detached for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.934306 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.934317 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/809f2a37-8f13-402b-ac98-8ea753a32978-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.934278 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="4af7b39f84ae1089f7c8b9340185a28b394a9429bf33b6cefed9e396e13808b9" exitCode=0
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.934310 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"4af7b39f84ae1089f7c8b9340185a28b394a9429bf33b6cefed9e396e13808b9"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.934373 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"f07fa92f05df6cf447a32d8da407b2cd8f537fec4af797ea35626d400c475838"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.939357 4748 generic.go:334] "Generic (PLEG): container finished" podID="809f2a37-8f13-402b-ac98-8ea753a32978" containerID="df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b" exitCode=143
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.939399 4748 generic.go:334] "Generic (PLEG): container finished" podID="809f2a37-8f13-402b-ac98-8ea753a32978" containerID="9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4" exitCode=143
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.939583 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"809f2a37-8f13-402b-ac98-8ea753a32978","Type":"ContainerDied","Data":"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.939662 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"809f2a37-8f13-402b-ac98-8ea753a32978","Type":"ContainerDied","Data":"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.939681 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"809f2a37-8f13-402b-ac98-8ea753a32978","Type":"ContainerDied","Data":"fb79c7ba2cc42be9a880cfc601c0eaa04f2d12c40cb83387581bdbadbed08117"}
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.940141 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:13:35 crc kubenswrapper[4748]: I0216 15:13:35.998159 4748 scope.go:117] "RemoveContainer" containerID="50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.035619 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-config-data\") pod \"c3398088-3491-4364-9b22-9f2f69527826\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") "
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.035666 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-httpd-run\") pod \"c3398088-3491-4364-9b22-9f2f69527826\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") "
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.035782 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzldk\" (UniqueName: \"kubernetes.io/projected/c3398088-3491-4364-9b22-9f2f69527826-kube-api-access-qzldk\") pod \"c3398088-3491-4364-9b22-9f2f69527826\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") "
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.035841 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-logs\") pod \"c3398088-3491-4364-9b22-9f2f69527826\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") "
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.035975 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-combined-ca-bundle\") pod \"c3398088-3491-4364-9b22-9f2f69527826\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") "
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.036016 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-scripts\") pod \"c3398088-3491-4364-9b22-9f2f69527826\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") "
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.036147 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"c3398088-3491-4364-9b22-9f2f69527826\" (UID: \"c3398088-3491-4364-9b22-9f2f69527826\") "
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.039181 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c3398088-3491-4364-9b22-9f2f69527826" (UID: "c3398088-3491-4364-9b22-9f2f69527826"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.043199 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-logs" (OuterVolumeSpecName: "logs") pod "c3398088-3491-4364-9b22-9f2f69527826" (UID: "c3398088-3491-4364-9b22-9f2f69527826"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.060224 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.068911 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3398088-3491-4364-9b22-9f2f69527826-kube-api-access-qzldk" (OuterVolumeSpecName: "kube-api-access-qzldk") pod "c3398088-3491-4364-9b22-9f2f69527826" (UID: "c3398088-3491-4364-9b22-9f2f69527826"). InnerVolumeSpecName "kube-api-access-qzldk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.100469 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.103392 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-scripts" (OuterVolumeSpecName: "scripts") pod "c3398088-3491-4364-9b22-9f2f69527826" (UID: "c3398088-3491-4364-9b22-9f2f69527826"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.137509 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.138061 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-httpd"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138077 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-httpd"
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.138097 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c4a81e3-35af-43e8-aef9-04c03c01bba1" containerName="init"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138106 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c4a81e3-35af-43e8-aef9-04c03c01bba1" containerName="init"
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.138123 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-log"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138132 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-log"
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.138154 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-log"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138163 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-log"
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.138177 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-httpd"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138185 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-httpd"
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.138224 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d2cfbc5-6234-4d16-b3e8-db42c311a912" containerName="init"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138233 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d2cfbc5-6234-4d16-b3e8-db42c311a912" containerName="init"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138397 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d2cfbc5-6234-4d16-b3e8-db42c311a912" containerName="init"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138415 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-log"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138424 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" containerName="glance-httpd"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138437 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-httpd"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138449 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3398088-3491-4364-9b22-9f2f69527826" containerName="glance-log"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.138459 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c4a81e3-35af-43e8-aef9-04c03c01bba1" containerName="init"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.139448 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.141300 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzldk\" (UniqueName: \"kubernetes.io/projected/c3398088-3491-4364-9b22-9f2f69527826-kube-api-access-qzldk\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.141337 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.141346 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.141354 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c3398088-3491-4364-9b22-9f2f69527826-httpd-run\") on node \"crc\" DevicePath \"\""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.144343 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.151355 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.158540 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-config-data" (OuterVolumeSpecName: "config-data") pod "c3398088-3491-4364-9b22-9f2f69527826" (UID: "c3398088-3491-4364-9b22-9f2f69527826"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.159974 4748 scope.go:117] "RemoveContainer" containerID="7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.163379 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3398088-3491-4364-9b22-9f2f69527826" (UID: "c3398088-3491-4364-9b22-9f2f69527826"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.164890 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e\": container with ID starting with 7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e not found: ID does not exist" containerID="7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.164923 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e"} err="failed to get container status \"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e\": rpc error: code = NotFound desc = could not find container \"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e\": container with ID starting with 7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e not found: ID does not exist"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.164945 4748 scope.go:117] "RemoveContainer" containerID="50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"
Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.176315 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084\": container with ID starting with 50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084 not found: ID does not exist" containerID="50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.176348 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"} err="failed to get container status \"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084\": rpc error: code = NotFound desc = could not find container \"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084\": container with ID starting with 50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084 not found: ID does not exist"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.176370 4748 scope.go:117] "RemoveContainer" containerID="7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.177484 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e"} err="failed to get container status \"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e\": rpc error: code = NotFound desc = could not find container \"7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e\": container with ID starting with 7c16ee459639e8ba6ed931ddacbafe9b907e79cc2fcd561663835ad4fe18925e not found: ID does not exist"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.177511 4748 scope.go:117] "RemoveContainer" containerID="50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.194692 4748
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084"} err="failed to get container status \"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084\": rpc error: code = NotFound desc = could not find container \"50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084\": container with ID starting with 50f2bac2bcad51b3bce13ed6c118be02b519806f64f42b17962d18c30e8c7084 not found: ID does not exist" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.194745 4748 scope.go:117] "RemoveContainer" containerID="67a90ffaffa57c523d924f86672f533c001cdd9525b4878908f13274a4bee682" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244078 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244132 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9w8r\" (UniqueName: \"kubernetes.io/projected/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-kube-api-access-g9w8r\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244212 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc 
kubenswrapper[4748]: I0216 15:13:36.244240 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244312 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-logs\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244333 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244355 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244420 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.244431 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c3398088-3491-4364-9b22-9f2f69527826-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.329550 4748 scope.go:117] "RemoveContainer" containerID="df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.346502 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.346563 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.346662 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-logs\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.346696 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.346798 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.346847 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.346876 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9w8r\" (UniqueName: \"kubernetes.io/projected/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-kube-api-access-g9w8r\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.349072 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-logs\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.349246 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.353366 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-scripts\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.360195 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-config-data\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.364922 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.370024 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f" (OuterVolumeSpecName: "glance") pod "c3398088-3491-4364-9b22-9f2f69527826" (UID: "c3398088-3491-4364-9b22-9f2f69527826"). InnerVolumeSpecName "pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.374869 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.374917 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d294a5b1f89c6c34c7d61f2a887a4dff0b1fa943cd0603a1d259c16f2f816998/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.388533 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9w8r\" (UniqueName: \"kubernetes.io/projected/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-kube-api-access-g9w8r\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.451958 4748 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") on node \"crc\" " Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.491006 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.492244 4748 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.492397 4748 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f") on node "crc" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.510958 4748 scope.go:117] "RemoveContainer" containerID="9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.554477 4748 reconciler_common.go:293] "Volume detached for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.554575 4748 scope.go:117] "RemoveContainer" containerID="df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b" Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.554983 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b\": container with ID starting with df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b not found: ID does not exist" containerID="df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.555034 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b"} err="failed to get container status \"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b\": rpc error: code = NotFound desc = could not find container \"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b\": container with ID starting with df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b not 
found: ID does not exist" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.555098 4748 scope.go:117] "RemoveContainer" containerID="9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4" Feb 16 15:13:36 crc kubenswrapper[4748]: E0216 15:13:36.557977 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4\": container with ID starting with 9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4 not found: ID does not exist" containerID="9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.558001 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4"} err="failed to get container status \"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4\": rpc error: code = NotFound desc = could not find container \"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4\": container with ID starting with 9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4 not found: ID does not exist" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.558015 4748 scope.go:117] "RemoveContainer" containerID="df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.558084 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.559218 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b"} err="failed to get container status \"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b\": rpc error: code = NotFound desc = could not find 
container \"df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b\": container with ID starting with df416d2bc0cbb35337c942dc880b7934358fb64b047967a581f6a0f4d41fbe5b not found: ID does not exist" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.559241 4748 scope.go:117] "RemoveContainer" containerID="9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.564660 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4"} err="failed to get container status \"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4\": rpc error: code = NotFound desc = could not find container \"9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4\": container with ID starting with 9d5b09d78d248f99991c1dad7e6277158185d5e17a2f55cd1e12c5950c26b6c4 not found: ID does not exist" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.566418 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.596206 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.605145 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.612011 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.612822 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.660858 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.660949 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.661190 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfxsz\" (UniqueName: \"kubernetes.io/projected/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-kube-api-access-lfxsz\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.661273 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") 
" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.661329 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.663646 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.663733 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.765348 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.765395 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfxsz\" (UniqueName: \"kubernetes.io/projected/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-kube-api-access-lfxsz\") pod \"glance-default-internal-api-0\" (UID: 
\"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.765442 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.765500 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.765602 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.765660 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.765760 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.766359 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.767124 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-logs\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.768878 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.768908 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f58f00d51b81be0c55943aba0909dac7acd0e6134cb135d989bed8b6a75cb071/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.770488 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.772437 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.773018 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.787469 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfxsz\" (UniqueName: \"kubernetes.io/projected/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-kube-api-access-lfxsz\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.788985 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.845253 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " 
pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.942629 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.950231 4748 generic.go:334] "Generic (PLEG): container finished" podID="2edd4f60-d486-4690-84a1-768479ff2749" containerID="d68ad46b87e23d3c1eff5c4239a24666e3ca74d9b7f5136f744160546adb6788" exitCode=0 Feb 16 15:13:36 crc kubenswrapper[4748]: I0216 15:13:36.950336 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w4qpk" event={"ID":"2edd4f60-d486-4690-84a1-768479ff2749","Type":"ContainerDied","Data":"d68ad46b87e23d3c1eff5c4239a24666e3ca74d9b7f5136f744160546adb6788"} Feb 16 15:13:37 crc kubenswrapper[4748]: I0216 15:13:37.012096 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="809f2a37-8f13-402b-ac98-8ea753a32978" path="/var/lib/kubelet/pods/809f2a37-8f13-402b-ac98-8ea753a32978/volumes" Feb 16 15:13:37 crc kubenswrapper[4748]: I0216 15:13:37.013579 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3398088-3491-4364-9b22-9f2f69527826" path="/var/lib/kubelet/pods/c3398088-3491-4364-9b22-9f2f69527826/volumes" Feb 16 15:13:38 crc kubenswrapper[4748]: I0216 15:13:38.395777 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:38 crc kubenswrapper[4748]: I0216 15:13:38.458402 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:40 crc kubenswrapper[4748]: I0216 15:13:40.432968 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" Feb 16 15:13:40 crc kubenswrapper[4748]: I0216 15:13:40.503572 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9np95"] Feb 16 15:13:40 crc 
kubenswrapper[4748]: I0216 15:13:40.503856 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-9np95" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="dnsmasq-dns" containerID="cri-o://7ef87fe3e16da4a50f9c37091c0df5170bdbf7fa4410abe9506a493923366301" gracePeriod=10 Feb 16 15:13:41 crc kubenswrapper[4748]: I0216 15:13:41.028919 4748 generic.go:334] "Generic (PLEG): container finished" podID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerID="7ef87fe3e16da4a50f9c37091c0df5170bdbf7fa4410abe9506a493923366301" exitCode=0 Feb 16 15:13:41 crc kubenswrapper[4748]: I0216 15:13:41.029239 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9np95" event={"ID":"37fffb84-58d3-4922-b28c-d85aa6986ce7","Type":"ContainerDied","Data":"7ef87fe3e16da4a50f9c37091c0df5170bdbf7fa4410abe9506a493923366301"} Feb 16 15:13:41 crc kubenswrapper[4748]: I0216 15:13:41.829457 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9np95" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: connect: connection refused" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.621272 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.688291 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle\") pod \"2edd4f60-d486-4690-84a1-768479ff2749\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.688447 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-scripts\") pod \"2edd4f60-d486-4690-84a1-768479ff2749\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.688491 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-config-data\") pod \"2edd4f60-d486-4690-84a1-768479ff2749\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.688551 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-credential-keys\") pod \"2edd4f60-d486-4690-84a1-768479ff2749\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.688573 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-682kn\" (UniqueName: \"kubernetes.io/projected/2edd4f60-d486-4690-84a1-768479ff2749-kube-api-access-682kn\") pod \"2edd4f60-d486-4690-84a1-768479ff2749\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.688605 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-fernet-keys\") pod \"2edd4f60-d486-4690-84a1-768479ff2749\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.696272 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "2edd4f60-d486-4690-84a1-768479ff2749" (UID: "2edd4f60-d486-4690-84a1-768479ff2749"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.697022 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "2edd4f60-d486-4690-84a1-768479ff2749" (UID: "2edd4f60-d486-4690-84a1-768479ff2749"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.719508 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2edd4f60-d486-4690-84a1-768479ff2749-kube-api-access-682kn" (OuterVolumeSpecName: "kube-api-access-682kn") pod "2edd4f60-d486-4690-84a1-768479ff2749" (UID: "2edd4f60-d486-4690-84a1-768479ff2749"). InnerVolumeSpecName "kube-api-access-682kn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.721581 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-scripts" (OuterVolumeSpecName: "scripts") pod "2edd4f60-d486-4690-84a1-768479ff2749" (UID: "2edd4f60-d486-4690-84a1-768479ff2749"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:45 crc kubenswrapper[4748]: E0216 15:13:45.731155 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle podName:2edd4f60-d486-4690-84a1-768479ff2749 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:46.231123727 +0000 UTC m=+1251.922792766 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "combined-ca-bundle" (UniqueName: "kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle") pod "2edd4f60-d486-4690-84a1-768479ff2749" (UID: "2edd4f60-d486-4690-84a1-768479ff2749") : error deleting /var/lib/kubelet/pods/2edd4f60-d486-4690-84a1-768479ff2749/volume-subpaths: remove /var/lib/kubelet/pods/2edd4f60-d486-4690-84a1-768479ff2749/volume-subpaths: no such file or directory Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.735009 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-config-data" (OuterVolumeSpecName: "config-data") pod "2edd4f60-d486-4690-84a1-768479ff2749" (UID: "2edd4f60-d486-4690-84a1-768479ff2749"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.790408 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.790440 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.790450 4748 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.790461 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-682kn\" (UniqueName: \"kubernetes.io/projected/2edd4f60-d486-4690-84a1-768479ff2749-kube-api-access-682kn\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:45 crc kubenswrapper[4748]: I0216 15:13:45.790469 4748 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.084663 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-w4qpk" event={"ID":"2edd4f60-d486-4690-84a1-768479ff2749","Type":"ContainerDied","Data":"b072d4b5b48d2cee616d8a5dc7f7a9cee9546e5ce384cd5c624b0f7ca50f8246"} Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.085052 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b072d4b5b48d2cee616d8a5dc7f7a9cee9546e5ce384cd5c624b0f7ca50f8246" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.084781 4748 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-w4qpk" Feb 16 15:13:46 crc kubenswrapper[4748]: E0216 15:13:46.126993 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:13:46 crc kubenswrapper[4748]: E0216 15:13:46.127063 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:13:46 crc kubenswrapper[4748]: E0216 15:13:46.127415 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:13:46 crc kubenswrapper[4748]: E0216 15:13:46.128853 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.301322 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle\") pod \"2edd4f60-d486-4690-84a1-768479ff2749\" (UID: \"2edd4f60-d486-4690-84a1-768479ff2749\") " Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.305563 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2edd4f60-d486-4690-84a1-768479ff2749" (UID: "2edd4f60-d486-4690-84a1-768479ff2749"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.402996 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2edd4f60-d486-4690-84a1-768479ff2749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.737828 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-w4qpk"] Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.745636 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-w4qpk"] Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.838020 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-msqwg"] Feb 16 15:13:46 crc kubenswrapper[4748]: E0216 15:13:46.838964 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2edd4f60-d486-4690-84a1-768479ff2749" containerName="keystone-bootstrap" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.838993 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2edd4f60-d486-4690-84a1-768479ff2749" containerName="keystone-bootstrap" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.839257 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2edd4f60-d486-4690-84a1-768479ff2749" containerName="keystone-bootstrap" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.840769 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.843010 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.843499 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.843694 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cd9jb" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.843865 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.849064 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-msqwg"] Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.916078 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-scripts\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.916184 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-credential-keys\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.916304 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-combined-ca-bundle\") pod \"keystone-bootstrap-msqwg\" (UID: 
\"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.916428 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-config-data\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.916555 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2lf\" (UniqueName: \"kubernetes.io/projected/7be12071-c948-4de3-8d3e-d21df02dfa91-kube-api-access-7s2lf\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:46 crc kubenswrapper[4748]: I0216 15:13:46.916681 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-fernet-keys\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.008769 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2edd4f60-d486-4690-84a1-768479ff2749" path="/var/lib/kubelet/pods/2edd4f60-d486-4690-84a1-768479ff2749/volumes" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.018524 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-scripts\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.018598 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-credential-keys\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.018815 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-combined-ca-bundle\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.019927 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-config-data\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.020077 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2lf\" (UniqueName: \"kubernetes.io/projected/7be12071-c948-4de3-8d3e-d21df02dfa91-kube-api-access-7s2lf\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.020166 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-fernet-keys\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.025315 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-scripts\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.025591 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-combined-ca-bundle\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.026566 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-config-data\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.028388 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-fernet-keys\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.028732 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-credential-keys\") pod \"keystone-bootstrap-msqwg\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.045975 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2lf\" (UniqueName: \"kubernetes.io/projected/7be12071-c948-4de3-8d3e-d21df02dfa91-kube-api-access-7s2lf\") pod \"keystone-bootstrap-msqwg\" (UID: 
\"7be12071-c948-4de3-8d3e-d21df02dfa91\") " pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:47 crc kubenswrapper[4748]: I0216 15:13:47.169536 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:13:51 crc kubenswrapper[4748]: I0216 15:13:51.830656 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9np95" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.136917 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9np95" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.189014 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-9np95" event={"ID":"37fffb84-58d3-4922-b28c-d85aa6986ce7","Type":"ContainerDied","Data":"0887990cea8f51996666db9286acba473f7435221d4026c87bfbc4dbdba5feb4"} Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.189039 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-9np95" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.189080 4748 scope.go:117] "RemoveContainer" containerID="7ef87fe3e16da4a50f9c37091c0df5170bdbf7fa4410abe9506a493923366301" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.192167 4748 generic.go:334] "Generic (PLEG): container finished" podID="d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" containerID="07859cdc596123ee747df27a7e91f2c6e59a6afcd022065c5b2acb0d467172de" exitCode=0 Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.192199 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jj48w" event={"ID":"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf","Type":"ContainerDied","Data":"07859cdc596123ee747df27a7e91f2c6e59a6afcd022065c5b2acb0d467172de"} Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.267134 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnvsg\" (UniqueName: \"kubernetes.io/projected/37fffb84-58d3-4922-b28c-d85aa6986ce7-kube-api-access-rnvsg\") pod \"37fffb84-58d3-4922-b28c-d85aa6986ce7\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.267201 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-dns-svc\") pod \"37fffb84-58d3-4922-b28c-d85aa6986ce7\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.267253 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-sb\") pod \"37fffb84-58d3-4922-b28c-d85aa6986ce7\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.267358 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config\") pod \"37fffb84-58d3-4922-b28c-d85aa6986ce7\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.267404 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-nb\") pod \"37fffb84-58d3-4922-b28c-d85aa6986ce7\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.287932 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fffb84-58d3-4922-b28c-d85aa6986ce7-kube-api-access-rnvsg" (OuterVolumeSpecName: "kube-api-access-rnvsg") pod "37fffb84-58d3-4922-b28c-d85aa6986ce7" (UID: "37fffb84-58d3-4922-b28c-d85aa6986ce7"). InnerVolumeSpecName "kube-api-access-rnvsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.319187 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37fffb84-58d3-4922-b28c-d85aa6986ce7" (UID: "37fffb84-58d3-4922-b28c-d85aa6986ce7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.327587 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37fffb84-58d3-4922-b28c-d85aa6986ce7" (UID: "37fffb84-58d3-4922-b28c-d85aa6986ce7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:55 crc kubenswrapper[4748]: E0216 15:13:55.334247 4748 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config podName:37fffb84-58d3-4922-b28c-d85aa6986ce7 nodeName:}" failed. No retries permitted until 2026-02-16 15:13:55.83421854 +0000 UTC m=+1261.525887579 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config") pod "37fffb84-58d3-4922-b28c-d85aa6986ce7" (UID: "37fffb84-58d3-4922-b28c-d85aa6986ce7") : error deleting /var/lib/kubelet/pods/37fffb84-58d3-4922-b28c-d85aa6986ce7/volume-subpaths: remove /var/lib/kubelet/pods/37fffb84-58d3-4922-b28c-d85aa6986ce7/volume-subpaths: no such file or directory Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.334622 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37fffb84-58d3-4922-b28c-d85aa6986ce7" (UID: "37fffb84-58d3-4922-b28c-d85aa6986ce7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.373373 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.373416 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnvsg\" (UniqueName: \"kubernetes.io/projected/37fffb84-58d3-4922-b28c-d85aa6986ce7-kube-api-access-rnvsg\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.373430 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.373443 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.880260 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config\") pod \"37fffb84-58d3-4922-b28c-d85aa6986ce7\" (UID: \"37fffb84-58d3-4922-b28c-d85aa6986ce7\") " Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.881518 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config" (OuterVolumeSpecName: "config") pod "37fffb84-58d3-4922-b28c-d85aa6986ce7" (UID: "37fffb84-58d3-4922-b28c-d85aa6986ce7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:13:55 crc kubenswrapper[4748]: I0216 15:13:55.983249 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37fffb84-58d3-4922-b28c-d85aa6986ce7-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.126838 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9np95"] Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.141649 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-9np95"] Feb 16 15:13:56 crc kubenswrapper[4748]: E0216 15:13:56.155799 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 16 15:13:56 crc kubenswrapper[4748]: E0216 15:13:56.155946 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d45dh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-5n8n8_openstack(59c88a82-b5c7-43aa-b216-2fe7bcc6dd71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 16 15:13:56 crc kubenswrapper[4748]: E0216 15:13:56.157531 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-5n8n8" podUID="59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.179217 4748 scope.go:117] "RemoveContainer" containerID="63b5932c9884725c35e5016afa75ced61693f2e5442063a5294ccf5a024a1239" Feb 16 15:13:56 crc kubenswrapper[4748]: E0216 15:13:56.207096 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-5n8n8" podUID="59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.693739 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jj48w" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.783592 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:13:56 crc kubenswrapper[4748]: W0216 15:13:56.786698 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbf2f897f_cf20_489b_aba1_7fbaff4c21a0.slice/crio-8089855d11b8c49116c07497869211b83f8b7cdda862a5cf616c32a971f7a182 WatchSource:0}: Error finding container 8089855d11b8c49116c07497869211b83f8b7cdda862a5cf616c32a971f7a182: Status 404 returned error can't find the container with id 8089855d11b8c49116c07497869211b83f8b7cdda862a5cf616c32a971f7a182 Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.798379 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xl478\" (UniqueName: \"kubernetes.io/projected/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-kube-api-access-xl478\") pod \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.798472 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-combined-ca-bundle\") pod \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.798551 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-config\") pod \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\" (UID: \"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf\") " Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.804924 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-kube-api-access-xl478" (OuterVolumeSpecName: "kube-api-access-xl478") pod "d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" (UID: "d248cf3a-4788-4b1b-9c6c-9fea87ed20cf"). InnerVolumeSpecName "kube-api-access-xl478". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.831850 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-9np95" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.130:5353: i/o timeout" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.836335 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-config" (OuterVolumeSpecName: "config") pod "d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" (UID: "d248cf3a-4788-4b1b-9c6c-9fea87ed20cf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.836840 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" (UID: "d248cf3a-4788-4b1b-9c6c-9fea87ed20cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.900153 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xl478\" (UniqueName: \"kubernetes.io/projected/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-kube-api-access-xl478\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.900188 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.900198 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:13:56 crc kubenswrapper[4748]: I0216 15:13:56.900314 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-msqwg"] Feb 16 15:13:56 crc kubenswrapper[4748]: W0216 15:13:56.905086 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be12071_c948_4de3_8d3e_d21df02dfa91.slice/crio-2a79122f3ed8c71786870ef43e63e5973c4ffe57d3f551dd57d773f0792a6f01 WatchSource:0}: Error finding container 2a79122f3ed8c71786870ef43e63e5973c4ffe57d3f551dd57d773f0792a6f01: Status 404 returned error can't find the container with id 2a79122f3ed8c71786870ef43e63e5973c4ffe57d3f551dd57d773f0792a6f01 Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.008893 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" path="/var/lib/kubelet/pods/37fffb84-58d3-4922-b28c-d85aa6986ce7/volumes" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.226577 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7tc2j" 
event={"ID":"011c3199-3e59-4794-ab44-de1abe4675a0","Type":"ContainerStarted","Data":"fed68d69696b1eb5f344860cd4c49a6f421ff1adc0c21ea6ecaa77c4ac1308da"} Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.250237 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-msqwg" event={"ID":"7be12071-c948-4de3-8d3e-d21df02dfa91","Type":"ContainerStarted","Data":"eb8a6b756e2112fc2bcaa43573594481ddbe1f7a26a74a3dc3ae3affcd7735dc"} Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.250291 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-msqwg" event={"ID":"7be12071-c948-4de3-8d3e-d21df02dfa91","Type":"ContainerStarted","Data":"2a79122f3ed8c71786870ef43e63e5973c4ffe57d3f551dd57d773f0792a6f01"} Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.254459 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7tc2j" podStartSLOduration=4.66837018 podStartE2EDuration="29.254443991s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="2026-02-16 15:13:31.593005866 +0000 UTC m=+1237.284674905" lastFinishedPulling="2026-02-16 15:13:56.179079677 +0000 UTC m=+1261.870748716" observedRunningTime="2026-02-16 15:13:57.246347393 +0000 UTC m=+1262.938016432" watchObservedRunningTime="2026-02-16 15:13:57.254443991 +0000 UTC m=+1262.946113030" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.255595 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5jqr7" event={"ID":"1b3dfe98-47ed-4b69-b461-f0f9185e4697","Type":"ContainerStarted","Data":"e479790bbaba4129511d3075304fad8e857f596d77df601c7464e9f4ff714ffe"} Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.259130 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jj48w" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.259762 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jj48w" event={"ID":"d248cf3a-4788-4b1b-9c6c-9fea87ed20cf","Type":"ContainerDied","Data":"20cd604597330223a59ed5803e90419ecf6115f35c6c1b0aebaf33d3da036451"} Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.259815 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20cd604597330223a59ed5803e90419ecf6115f35c6c1b0aebaf33d3da036451" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.270127 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf2f897f-cf20-489b-aba1-7fbaff4c21a0","Type":"ContainerStarted","Data":"8089855d11b8c49116c07497869211b83f8b7cdda862a5cf616c32a971f7a182"} Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.272677 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerStarted","Data":"543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828"} Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.275321 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-msqwg" podStartSLOduration=11.275302882 podStartE2EDuration="11.275302882s" podCreationTimestamp="2026-02-16 15:13:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:57.263498623 +0000 UTC m=+1262.955167662" watchObservedRunningTime="2026-02-16 15:13:57.275302882 +0000 UTC m=+1262.966971921" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.335748 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5jqr7" podStartSLOduration=3.943420959 
podStartE2EDuration="28.335729333s" podCreationTimestamp="2026-02-16 15:13:29 +0000 UTC" firstStartedPulling="2026-02-16 15:13:31.710465574 +0000 UTC m=+1237.402134613" lastFinishedPulling="2026-02-16 15:13:56.102773958 +0000 UTC m=+1261.794442987" observedRunningTime="2026-02-16 15:13:57.325975144 +0000 UTC m=+1263.017644183" watchObservedRunningTime="2026-02-16 15:13:57.335729333 +0000 UTC m=+1263.027398372" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.440856 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hr4l4"] Feb 16 15:13:57 crc kubenswrapper[4748]: E0216 15:13:57.441848 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="dnsmasq-dns" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.441947 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="dnsmasq-dns" Feb 16 15:13:57 crc kubenswrapper[4748]: E0216 15:13:57.442069 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="init" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.442164 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="init" Feb 16 15:13:57 crc kubenswrapper[4748]: E0216 15:13:57.442247 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" containerName="neutron-db-sync" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.442325 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" containerName="neutron-db-sync" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.442659 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" containerName="neutron-db-sync" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 
15:13:57.442792 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fffb84-58d3-4922-b28c-d85aa6986ce7" containerName="dnsmasq-dns" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.445131 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.512791 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hr4l4"] Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.583664 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-85b968f78-p2mst"] Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.585366 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.598039 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.598845 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.599029 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.599034 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-47qwf" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.623006 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brd72\" (UniqueName: \"kubernetes.io/projected/84c2e211-48cf-4dc8-8036-8dec6a355951-kube-api-access-brd72\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.623070 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.623108 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-config\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.623176 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.623244 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.623299 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 
15:13:57.659385 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85b968f78-p2mst"] Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.673056 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726199 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4mg8\" (UniqueName: \"kubernetes.io/projected/5828c48d-11da-4512-9c8d-75a789082601-kube-api-access-g4mg8\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726251 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brd72\" (UniqueName: \"kubernetes.io/projected/84c2e211-48cf-4dc8-8036-8dec6a355951-kube-api-access-brd72\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726277 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726324 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-config\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726383 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726432 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-httpd-config\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726469 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726515 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726546 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-ovndb-tls-certs\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726585 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-combined-ca-bundle\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.726622 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-config\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.732578 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-svc\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.733358 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-config\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.736215 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.736519 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-nb\") pod 
\"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.737219 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.763482 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brd72\" (UniqueName: \"kubernetes.io/projected/84c2e211-48cf-4dc8-8036-8dec6a355951-kube-api-access-brd72\") pod \"dnsmasq-dns-55f844cf75-hr4l4\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.784830 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.830307 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-httpd-config\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.830381 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-ovndb-tls-certs\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.830414 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-combined-ca-bundle\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.830440 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-config\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.830480 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4mg8\" (UniqueName: \"kubernetes.io/projected/5828c48d-11da-4512-9c8d-75a789082601-kube-api-access-g4mg8\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: 
I0216 15:13:57.835976 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-httpd-config\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.836954 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-ovndb-tls-certs\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.841285 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-config\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.863630 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-combined-ca-bundle\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:57 crc kubenswrapper[4748]: I0216 15:13:57.866617 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4mg8\" (UniqueName: \"kubernetes.io/projected/5828c48d-11da-4512-9c8d-75a789082601-kube-api-access-g4mg8\") pod \"neutron-85b968f78-p2mst\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") " pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:58 crc kubenswrapper[4748]: I0216 15:13:58.122528 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:13:58 crc kubenswrapper[4748]: I0216 15:13:58.327581 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf2f897f-cf20-489b-aba1-7fbaff4c21a0","Type":"ContainerStarted","Data":"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e"} Feb 16 15:13:58 crc kubenswrapper[4748]: I0216 15:13:58.339927 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c","Type":"ContainerStarted","Data":"3fd5484b8b191a8ccb0ff874e3919ab70ea0a88218acbfa0544d6e5f08edf606"} Feb 16 15:13:58 crc kubenswrapper[4748]: I0216 15:13:58.543049 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hr4l4"] Feb 16 15:13:58 crc kubenswrapper[4748]: W0216 15:13:58.974144 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod84c2e211_48cf_4dc8_8036_8dec6a355951.slice/crio-34b4ecae7f53aad88827b1030e4ee6096c9333a820a208da1a6cd1a05c991898 WatchSource:0}: Error finding container 34b4ecae7f53aad88827b1030e4ee6096c9333a820a208da1a6cd1a05c991898: Status 404 returned error can't find the container with id 34b4ecae7f53aad88827b1030e4ee6096c9333a820a208da1a6cd1a05c991898 Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.399562 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf2f897f-cf20-489b-aba1-7fbaff4c21a0","Type":"ContainerStarted","Data":"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96"} Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.400916 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-httpd" 
containerID="cri-o://890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96" gracePeriod=30 Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.401051 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-log" containerID="cri-o://2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e" gracePeriod=30 Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.404254 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" event={"ID":"84c2e211-48cf-4dc8-8036-8dec6a355951","Type":"ContainerStarted","Data":"34b4ecae7f53aad88827b1030e4ee6096c9333a820a208da1a6cd1a05c991898"} Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.427382 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=23.427364022 podStartE2EDuration="23.427364022s" podCreationTimestamp="2026-02-16 15:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:13:59.422173975 +0000 UTC m=+1265.113843024" watchObservedRunningTime="2026-02-16 15:13:59.427364022 +0000 UTC m=+1265.119033061" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.428107 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c","Type":"ContainerStarted","Data":"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975"} Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.700051 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-85b968f78-p2mst"] Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.751895 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b9d456665-xjt6m"] Feb 16 15:13:59 
crc kubenswrapper[4748]: I0216 15:13:59.754985 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.757310 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.757497 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 16 15:13:59 crc kubenswrapper[4748]: W0216 15:13:59.764271 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5828c48d_11da_4512_9c8d_75a789082601.slice/crio-d1b6b153d971b37db47bc054b3e8c5301a012509c71deb7f157e0890c03218a1 WatchSource:0}: Error finding container d1b6b153d971b37db47bc054b3e8c5301a012509c71deb7f157e0890c03218a1: Status 404 returned error can't find the container with id d1b6b153d971b37db47bc054b3e8c5301a012509c71deb7f157e0890c03218a1 Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.774594 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b9d456665-xjt6m"] Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.892209 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-ovndb-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.892636 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-combined-ca-bundle\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " 
pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.892837 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-internal-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.893201 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-public-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.893269 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-httpd-config\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.893314 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-config\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.893340 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4lnr\" (UniqueName: \"kubernetes.io/projected/3938329a-9d91-481e-9993-09917f2c7686-kube-api-access-k4lnr\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " 
pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.995885 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-public-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.995946 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-httpd-config\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.995976 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-config\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.996023 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4lnr\" (UniqueName: \"kubernetes.io/projected/3938329a-9d91-481e-9993-09917f2c7686-kube-api-access-k4lnr\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.996136 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-ovndb-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.996208 
4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-combined-ca-bundle\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:13:59 crc kubenswrapper[4748]: I0216 15:13:59.996281 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-internal-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.004247 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-public-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.004326 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-internal-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.006814 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-httpd-config\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.007205 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-config\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.010416 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-ovndb-tls-certs\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.015646 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-combined-ca-bundle\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.019704 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4lnr\" (UniqueName: \"kubernetes.io/projected/3938329a-9d91-481e-9993-09917f2c7686-kube-api-access-k4lnr\") pod \"neutron-7b9d456665-xjt6m\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.094304 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.399796 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.484694 4748 generic.go:334] "Generic (PLEG): container finished" podID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerID="890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96" exitCode=0 Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.484739 4748 generic.go:334] "Generic (PLEG): container finished" podID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerID="2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e" exitCode=143 Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.484806 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf2f897f-cf20-489b-aba1-7fbaff4c21a0","Type":"ContainerDied","Data":"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.484833 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf2f897f-cf20-489b-aba1-7fbaff4c21a0","Type":"ContainerDied","Data":"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.484845 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"bf2f897f-cf20-489b-aba1-7fbaff4c21a0","Type":"ContainerDied","Data":"8089855d11b8c49116c07497869211b83f8b7cdda862a5cf616c32a971f7a182"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.484860 4748 scope.go:117] "RemoveContainer" containerID="890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.484991 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.490808 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerStarted","Data":"0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.501202 4748 generic.go:334] "Generic (PLEG): container finished" podID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerID="529f11fe66be21b6ad13047387e29888f27a83ddb000433c36e13f0dc298583f" exitCode=0 Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.502308 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" event={"ID":"84c2e211-48cf-4dc8-8036-8dec6a355951","Type":"ContainerDied","Data":"529f11fe66be21b6ad13047387e29888f27a83ddb000433c36e13f0dc298583f"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.508495 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-logs\") pod \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.508637 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.508805 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-config-data\") pod \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " Feb 16 15:14:00 crc 
kubenswrapper[4748]: I0216 15:14:00.508845 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-combined-ca-bundle\") pod \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.508921 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-httpd-run\") pod \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.508998 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-scripts\") pod \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.509067 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfxsz\" (UniqueName: \"kubernetes.io/projected/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-kube-api-access-lfxsz\") pod \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\" (UID: \"bf2f897f-cf20-489b-aba1-7fbaff4c21a0\") " Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.510671 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-logs" (OuterVolumeSpecName: "logs") pod "bf2f897f-cf20-489b-aba1-7fbaff4c21a0" (UID: "bf2f897f-cf20-489b-aba1-7fbaff4c21a0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.511353 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "bf2f897f-cf20-489b-aba1-7fbaff4c21a0" (UID: "bf2f897f-cf20-489b-aba1-7fbaff4c21a0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.512853 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-log" containerID="cri-o://951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975" gracePeriod=30 Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.512933 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c","Type":"ContainerStarted","Data":"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.512988 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-httpd" containerID="cri-o://ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7" gracePeriod=30 Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.519184 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-scripts" (OuterVolumeSpecName: "scripts") pod "bf2f897f-cf20-489b-aba1-7fbaff4c21a0" (UID: "bf2f897f-cf20-489b-aba1-7fbaff4c21a0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.527013 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-kube-api-access-lfxsz" (OuterVolumeSpecName: "kube-api-access-lfxsz") pod "bf2f897f-cf20-489b-aba1-7fbaff4c21a0" (UID: "bf2f897f-cf20-489b-aba1-7fbaff4c21a0"). InnerVolumeSpecName "kube-api-access-lfxsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.533583 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85b968f78-p2mst" event={"ID":"5828c48d-11da-4512-9c8d-75a789082601","Type":"ContainerStarted","Data":"3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.533628 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85b968f78-p2mst" event={"ID":"5828c48d-11da-4512-9c8d-75a789082601","Type":"ContainerStarted","Data":"d1b6b153d971b37db47bc054b3e8c5301a012509c71deb7f157e0890c03218a1"} Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.543106 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f" (OuterVolumeSpecName: "glance") pod "bf2f897f-cf20-489b-aba1-7fbaff4c21a0" (UID: "bf2f897f-cf20-489b-aba1-7fbaff4c21a0"). InnerVolumeSpecName "pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.560757 4748 scope.go:117] "RemoveContainer" containerID="2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.568826 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf2f897f-cf20-489b-aba1-7fbaff4c21a0" (UID: "bf2f897f-cf20-489b-aba1-7fbaff4c21a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.583132 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=24.583109156 podStartE2EDuration="24.583109156s" podCreationTimestamp="2026-02-16 15:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:00.552023244 +0000 UTC m=+1266.243692283" watchObservedRunningTime="2026-02-16 15:14:00.583109156 +0000 UTC m=+1266.274778195" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.614327 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.614612 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfxsz\" (UniqueName: \"kubernetes.io/projected/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-kube-api-access-lfxsz\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.614627 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-logs\") 
on node \"crc\" DevicePath \"\"" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.614657 4748 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") on node \"crc\" " Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.614669 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.614679 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.703971 4748 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.704147 4748 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f") on node "crc" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.716091 4748 reconciler_common.go:293] "Volume detached for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.763420 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-config-data" (OuterVolumeSpecName: "config-data") pod "bf2f897f-cf20-489b-aba1-7fbaff4c21a0" (UID: "bf2f897f-cf20-489b-aba1-7fbaff4c21a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.768972 4748 scope.go:117] "RemoveContainer" containerID="890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96" Feb 16 15:14:00 crc kubenswrapper[4748]: E0216 15:14:00.769494 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96\": container with ID starting with 890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96 not found: ID does not exist" containerID="890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.769565 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96"} err="failed to get container status 
\"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96\": rpc error: code = NotFound desc = could not find container \"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96\": container with ID starting with 890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96 not found: ID does not exist" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.769586 4748 scope.go:117] "RemoveContainer" containerID="2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e" Feb 16 15:14:00 crc kubenswrapper[4748]: E0216 15:14:00.769869 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e\": container with ID starting with 2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e not found: ID does not exist" containerID="2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.769895 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e"} err="failed to get container status \"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e\": rpc error: code = NotFound desc = could not find container \"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e\": container with ID starting with 2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e not found: ID does not exist" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.769913 4748 scope.go:117] "RemoveContainer" containerID="890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.770330 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96"} err="failed to get 
container status \"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96\": rpc error: code = NotFound desc = could not find container \"890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96\": container with ID starting with 890c7e8268a5f8fc3c3e9d755e4a6e04a2b0aff606d1376ddf1710d07eaa9f96 not found: ID does not exist" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.770355 4748 scope.go:117] "RemoveContainer" containerID="2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.770643 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e"} err="failed to get container status \"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e\": rpc error: code = NotFound desc = could not find container \"2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e\": container with ID starting with 2827d8297c31b38556f5589091e1654b7f7d723aa159b5a0d03ed73cad4abd7e not found: ID does not exist" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.817680 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf2f897f-cf20-489b-aba1-7fbaff4c21a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.891767 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b9d456665-xjt6m"] Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.917084 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.950249 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.961474 4748 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:00 crc kubenswrapper[4748]: E0216 15:14:00.961938 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-httpd" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.961967 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-httpd" Feb 16 15:14:00 crc kubenswrapper[4748]: E0216 15:14:00.961993 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-log" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.962007 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-log" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.962194 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-log" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.962222 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" containerName="glance-httpd" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.963282 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.969026 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.969792 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:14:00 crc kubenswrapper[4748]: I0216 15:14:00.971351 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:00 crc kubenswrapper[4748]: E0216 15:14:00.996038 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.042707 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2f897f-cf20-489b-aba1-7fbaff4c21a0" path="/var/lib/kubelet/pods/bf2f897f-cf20-489b-aba1-7fbaff4c21a0/volumes" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126038 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126214 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-logs\") pod \"glance-default-internal-api-0\" (UID: 
\"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126253 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42f65\" (UniqueName: \"kubernetes.io/projected/d1f72feb-de93-4fb2-a936-b1e69c347a7b-kube-api-access-42f65\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126314 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126403 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126430 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126677 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.126768 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.233463 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.233583 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.233652 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.233863 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.233917 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42f65\" (UniqueName: \"kubernetes.io/projected/d1f72feb-de93-4fb2-a936-b1e69c347a7b-kube-api-access-42f65\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.233938 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.234064 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.234084 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.235991 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.236261 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.238941 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.240220 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.240252 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f58f00d51b81be0c55943aba0909dac7acd0e6134cb135d989bed8b6a75cb071/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.242109 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.246156 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.249211 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.262648 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42f65\" (UniqueName: \"kubernetes.io/projected/d1f72feb-de93-4fb2-a936-b1e69c347a7b-kube-api-access-42f65\") pod 
\"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.327670 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.485165 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.541134 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-httpd-run\") pod \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.541259 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9w8r\" (UniqueName: \"kubernetes.io/projected/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-kube-api-access-g9w8r\") pod \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.541470 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.541553 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-scripts\") pod \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.541606 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-config-data\") pod \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.541644 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-logs\") pod \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.541747 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-combined-ca-bundle\") pod \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\" (UID: \"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c\") " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.548900 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" (UID: "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.549891 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-logs" (OuterVolumeSpecName: "logs") pod "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" (UID: "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.553496 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-scripts" (OuterVolumeSpecName: "scripts") pod "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" (UID: "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.556061 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-kube-api-access-g9w8r" (OuterVolumeSpecName: "kube-api-access-g9w8r") pod "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" (UID: "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c"). InnerVolumeSpecName "kube-api-access-g9w8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.595013 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.597074 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" event={"ID":"84c2e211-48cf-4dc8-8036-8dec6a355951","Type":"ContainerStarted","Data":"d78350a2c9f6852a7063330bfff32d91286abf8f41aa97600cb58a3b35561a3d"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.597844 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.630761 4748 generic.go:334] "Generic (PLEG): container finished" podID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerID="ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7" exitCode=0 Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.630793 4748 generic.go:334] "Generic (PLEG): container finished" podID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerID="951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975" exitCode=143 Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.630861 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c","Type":"ContainerDied","Data":"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.630888 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c","Type":"ContainerDied","Data":"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.630912 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"35b48a7a-a52a-4f9a-a928-5ed7901a5f7c","Type":"ContainerDied","Data":"3fd5484b8b191a8ccb0ff874e3919ab70ea0a88218acbfa0544d6e5f08edf606"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.630927 4748 scope.go:117] "RemoveContainer" containerID="ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.631067 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.636445 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" podStartSLOduration=4.636371839 podStartE2EDuration="4.636371839s" podCreationTimestamp="2026-02-16 15:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:01.631566351 +0000 UTC m=+1267.323235400" watchObservedRunningTime="2026-02-16 15:14:01.636371839 +0000 UTC m=+1267.328040878" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.646947 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9d456665-xjt6m" event={"ID":"3938329a-9d91-481e-9993-09917f2c7686","Type":"ContainerStarted","Data":"bed6d02f53dd73b01cb8e00702a6ecbf387f6806345e2ae8294b7b0fc3a8d9af"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.647187 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9d456665-xjt6m" event={"ID":"3938329a-9d91-481e-9993-09917f2c7686","Type":"ContainerStarted","Data":"1d8ba83511e7b3145fd3469bbeebd41caa58bac6caec18aedbe55cb40a1a3187"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.648453 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9w8r\" (UniqueName: \"kubernetes.io/projected/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-kube-api-access-g9w8r\") on node \"crc\" DevicePath \"\"" Feb 16 
15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.648492 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.648503 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.648531 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.667040 4748 generic.go:334] "Generic (PLEG): container finished" podID="1b3dfe98-47ed-4b69-b461-f0f9185e4697" containerID="e479790bbaba4129511d3075304fad8e857f596d77df601c7464e9f4ff714ffe" exitCode=0 Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.667179 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5jqr7" event={"ID":"1b3dfe98-47ed-4b69-b461-f0f9185e4697","Type":"ContainerDied","Data":"e479790bbaba4129511d3075304fad8e857f596d77df601c7464e9f4ff714ffe"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.676599 4748 scope.go:117] "RemoveContainer" containerID="951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.694365 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc" (OuterVolumeSpecName: "glance") pod "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" (UID: "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c"). InnerVolumeSpecName "pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.694636 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85b968f78-p2mst" event={"ID":"5828c48d-11da-4512-9c8d-75a789082601","Type":"ContainerStarted","Data":"d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596"} Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.694882 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.735480 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-85b968f78-p2mst" podStartSLOduration=4.735454526 podStartE2EDuration="4.735454526s" podCreationTimestamp="2026-02-16 15:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:01.721881763 +0000 UTC m=+1267.413550812" watchObservedRunningTime="2026-02-16 15:14:01.735454526 +0000 UTC m=+1267.427123565" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.736905 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" (UID: "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.737680 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-config-data" (OuterVolumeSpecName: "config-data") pod "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" (UID: "35b48a7a-a52a-4f9a-a928-5ed7901a5f7c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.750892 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.750929 4748 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") on node \"crc\" " Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.750942 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.761449 4748 scope.go:117] "RemoveContainer" containerID="ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7" Feb 16 15:14:01 crc kubenswrapper[4748]: E0216 15:14:01.766432 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7\": container with ID starting with ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7 not found: ID does not exist" containerID="ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.766658 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7"} err="failed to get container status \"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7\": rpc error: code = NotFound desc = could not find container 
\"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7\": container with ID starting with ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7 not found: ID does not exist" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.766692 4748 scope.go:117] "RemoveContainer" containerID="951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975" Feb 16 15:14:01 crc kubenswrapper[4748]: E0216 15:14:01.771224 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975\": container with ID starting with 951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975 not found: ID does not exist" containerID="951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.771264 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975"} err="failed to get container status \"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975\": rpc error: code = NotFound desc = could not find container \"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975\": container with ID starting with 951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975 not found: ID does not exist" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.771291 4748 scope.go:117] "RemoveContainer" containerID="ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.771956 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7"} err="failed to get container status \"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7\": rpc error: code = NotFound desc = could not find 
container \"ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7\": container with ID starting with ab3b5d3a082101edd94034275412d89bbec3e6c53f51ffd9ba382cbf6a9cfcf7 not found: ID does not exist" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.771972 4748 scope.go:117] "RemoveContainer" containerID="951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.772683 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975"} err="failed to get container status \"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975\": rpc error: code = NotFound desc = could not find container \"951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975\": container with ID starting with 951ee7d5a00729f6440c237a743e4e6c23fc5eda7e00279d94cd978834db5975 not found: ID does not exist" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.777859 4748 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.778875 4748 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc") on node "crc" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.859883 4748 reconciler_common.go:293] "Volume detached for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:01 crc kubenswrapper[4748]: I0216 15:14:01.988402 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.005552 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.021101 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:02 crc kubenswrapper[4748]: E0216 15:14:02.021583 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-httpd" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.021602 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-httpd" Feb 16 15:14:02 crc kubenswrapper[4748]: E0216 15:14:02.021638 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-log" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.021646 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-log" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.021909 4748 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-log" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.021940 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" containerName="glance-httpd" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.023039 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.026103 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.030944 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.061221 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.175652 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.176078 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-logs\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.176143 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.176180 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.176203 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.176232 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clf4s\" (UniqueName: \"kubernetes.io/projected/6e869859-8d55-4b07-90cf-6936061845a0-kube-api-access-clf4s\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.176268 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.176345 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.278924 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279041 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-logs\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279099 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279126 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279146 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279174 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clf4s\" (UniqueName: \"kubernetes.io/projected/6e869859-8d55-4b07-90cf-6936061845a0-kube-api-access-clf4s\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279207 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279273 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.279779 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-logs\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.282946 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.286224 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.286297 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d294a5b1f89c6c34c7d61f2a887a4dff0b1fa943cd0603a1d259c16f2f816998/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.286263 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-config-data\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.291080 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-scripts\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.291637 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-public-tls-certs\") 
pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.301657 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clf4s\" (UniqueName: \"kubernetes.io/projected/6e869859-8d55-4b07-90cf-6936061845a0-kube-api-access-clf4s\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.302767 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.317537 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:14:02 crc kubenswrapper[4748]: W0216 15:14:02.317740 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1f72feb_de93_4fb2_a936_b1e69c347a7b.slice/crio-0feb7fb0a091645a3633d680f57cfafe9b4805fa4b46635de6c91772f2e08cc4 WatchSource:0}: Error finding container 0feb7fb0a091645a3633d680f57cfafe9b4805fa4b46635de6c91772f2e08cc4: Status 404 returned error can't find the container with id 0feb7fb0a091645a3633d680f57cfafe9b4805fa4b46635de6c91772f2e08cc4 Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.354698 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: 
\"6e869859-8d55-4b07-90cf-6936061845a0\") " pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.362538 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.732594 4748 generic.go:334] "Generic (PLEG): container finished" podID="011c3199-3e59-4794-ab44-de1abe4675a0" containerID="fed68d69696b1eb5f344860cd4c49a6f421ff1adc0c21ea6ecaa77c4ac1308da" exitCode=0 Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.732650 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7tc2j" event={"ID":"011c3199-3e59-4794-ab44-de1abe4675a0","Type":"ContainerDied","Data":"fed68d69696b1eb5f344860cd4c49a6f421ff1adc0c21ea6ecaa77c4ac1308da"} Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.738542 4748 generic.go:334] "Generic (PLEG): container finished" podID="7be12071-c948-4de3-8d3e-d21df02dfa91" containerID="eb8a6b756e2112fc2bcaa43573594481ddbe1f7a26a74a3dc3ae3affcd7735dc" exitCode=0 Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.738618 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-msqwg" event={"ID":"7be12071-c948-4de3-8d3e-d21df02dfa91","Type":"ContainerDied","Data":"eb8a6b756e2112fc2bcaa43573594481ddbe1f7a26a74a3dc3ae3affcd7735dc"} Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.747582 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9d456665-xjt6m" event={"ID":"3938329a-9d91-481e-9993-09917f2c7686","Type":"ContainerStarted","Data":"4c5d1586b5c082cdfc2a2efc11e6c42eecd94344390310c446a7cf8f5c626ab7"} Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.748411 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.758673 4748 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f72feb-de93-4fb2-a936-b1e69c347a7b","Type":"ContainerStarted","Data":"0feb7fb0a091645a3633d680f57cfafe9b4805fa4b46635de6c91772f2e08cc4"} Feb 16 15:14:02 crc kubenswrapper[4748]: I0216 15:14:02.794267 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b9d456665-xjt6m" podStartSLOduration=3.794239533 podStartE2EDuration="3.794239533s" podCreationTimestamp="2026-02-16 15:13:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:02.770726237 +0000 UTC m=+1268.462395286" watchObservedRunningTime="2026-02-16 15:14:02.794239533 +0000 UTC m=+1268.485908582" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.041208 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35b48a7a-a52a-4f9a-a928-5ed7901a5f7c" path="/var/lib/kubelet/pods/35b48a7a-a52a-4f9a-a928-5ed7901a5f7c/volumes" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.042233 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:14:03 crc kubenswrapper[4748]: W0216 15:14:03.045165 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e869859_8d55_4b07_90cf_6936061845a0.slice/crio-43ae2f4ff0eee4427ac918bc4319a9a394595efe61cb5b324f8cc2a0a177ba17 WatchSource:0}: Error finding container 43ae2f4ff0eee4427ac918bc4319a9a394595efe61cb5b324f8cc2a0a177ba17: Status 404 returned error can't find the container with id 43ae2f4ff0eee4427ac918bc4319a9a394595efe61cb5b324f8cc2a0a177ba17 Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.397225 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5jqr7" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.529065 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3dfe98-47ed-4b69-b461-f0f9185e4697-logs\") pod \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.529246 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvqd8\" (UniqueName: \"kubernetes.io/projected/1b3dfe98-47ed-4b69-b461-f0f9185e4697-kube-api-access-hvqd8\") pod \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.529354 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-config-data\") pod \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.529415 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-scripts\") pod \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.529478 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-combined-ca-bundle\") pod \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\" (UID: \"1b3dfe98-47ed-4b69-b461-f0f9185e4697\") " Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.529583 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1b3dfe98-47ed-4b69-b461-f0f9185e4697-logs" (OuterVolumeSpecName: "logs") pod "1b3dfe98-47ed-4b69-b461-f0f9185e4697" (UID: "1b3dfe98-47ed-4b69-b461-f0f9185e4697"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.530555 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b3dfe98-47ed-4b69-b461-f0f9185e4697-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.549773 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b3dfe98-47ed-4b69-b461-f0f9185e4697-kube-api-access-hvqd8" (OuterVolumeSpecName: "kube-api-access-hvqd8") pod "1b3dfe98-47ed-4b69-b461-f0f9185e4697" (UID: "1b3dfe98-47ed-4b69-b461-f0f9185e4697"). InnerVolumeSpecName "kube-api-access-hvqd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.550075 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-scripts" (OuterVolumeSpecName: "scripts") pod "1b3dfe98-47ed-4b69-b461-f0f9185e4697" (UID: "1b3dfe98-47ed-4b69-b461-f0f9185e4697"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.592146 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b3dfe98-47ed-4b69-b461-f0f9185e4697" (UID: "1b3dfe98-47ed-4b69-b461-f0f9185e4697"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.617847 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-config-data" (OuterVolumeSpecName: "config-data") pod "1b3dfe98-47ed-4b69-b461-f0f9185e4697" (UID: "1b3dfe98-47ed-4b69-b461-f0f9185e4697"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.632952 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvqd8\" (UniqueName: \"kubernetes.io/projected/1b3dfe98-47ed-4b69-b461-f0f9185e4697-kube-api-access-hvqd8\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.632985 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.632995 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.633005 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b3dfe98-47ed-4b69-b461-f0f9185e4697-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.803443 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5jqr7" event={"ID":"1b3dfe98-47ed-4b69-b461-f0f9185e4697","Type":"ContainerDied","Data":"c863e54f394bd19f82129dede35095c921bb6e6547a301e68212d735b08bcb3e"} Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.803491 4748 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c863e54f394bd19f82129dede35095c921bb6e6547a301e68212d735b08bcb3e" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.803532 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5jqr7" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.828834 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e869859-8d55-4b07-90cf-6936061845a0","Type":"ContainerStarted","Data":"43ae2f4ff0eee4427ac918bc4319a9a394595efe61cb5b324f8cc2a0a177ba17"} Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.842528 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55548d48fb-q5fcz"] Feb 16 15:14:03 crc kubenswrapper[4748]: E0216 15:14:03.843620 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b3dfe98-47ed-4b69-b461-f0f9185e4697" containerName="placement-db-sync" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.843637 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b3dfe98-47ed-4b69-b461-f0f9185e4697" containerName="placement-db-sync" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.848182 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b3dfe98-47ed-4b69-b461-f0f9185e4697" containerName="placement-db-sync" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.851699 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.859315 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f72feb-de93-4fb2-a936-b1e69c347a7b","Type":"ContainerStarted","Data":"42ff2e37f171cbbdb115d789d95c7d0a9d7f806fc398467233c037595dddbdc6"} Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.859927 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.860200 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.860412 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.861179 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.861566 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-nvmt4" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.871055 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55548d48fb-q5fcz"] Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.940183 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-public-tls-certs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.940252 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl5mm\" 
(UniqueName: \"kubernetes.io/projected/2d1d6f88-2878-4893-a7de-e671f7e25ad9-kube-api-access-nl5mm\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.940349 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1d6f88-2878-4893-a7de-e671f7e25ad9-logs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.940418 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-internal-tls-certs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.940644 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-combined-ca-bundle\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.940682 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-config-data\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:03 crc kubenswrapper[4748]: I0216 15:14:03.940755 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-scripts\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.042956 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-scripts\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.043051 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-public-tls-certs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.043091 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl5mm\" (UniqueName: \"kubernetes.io/projected/2d1d6f88-2878-4893-a7de-e671f7e25ad9-kube-api-access-nl5mm\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.043176 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1d6f88-2878-4893-a7de-e671f7e25ad9-logs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.043255 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-internal-tls-certs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.043295 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-combined-ca-bundle\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.043331 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-config-data\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.044164 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1d6f88-2878-4893-a7de-e671f7e25ad9-logs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.052213 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-combined-ca-bundle\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.053045 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-internal-tls-certs\") pod 
\"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.066976 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-scripts\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.084304 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl5mm\" (UniqueName: \"kubernetes.io/projected/2d1d6f88-2878-4893-a7de-e671f7e25ad9-kube-api-access-nl5mm\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.088559 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-config-data\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.089475 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-public-tls-certs\") pod \"placement-55548d48fb-q5fcz\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.192937 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:04 crc kubenswrapper[4748]: I0216 15:14:04.876199 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e869859-8d55-4b07-90cf-6936061845a0","Type":"ContainerStarted","Data":"d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee"} Feb 16 15:14:06 crc kubenswrapper[4748]: I0216 15:14:06.900209 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f72feb-de93-4fb2-a936-b1e69c347a7b","Type":"ContainerStarted","Data":"61ef039aadfd8586ab9a914e4454b54661e24f1094dc63a4d50ac8b29fef7ecd"} Feb 16 15:14:07 crc kubenswrapper[4748]: I0216 15:14:07.785872 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:14:07 crc kubenswrapper[4748]: I0216 15:14:07.883350 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mll8s"] Feb 16 15:14:07 crc kubenswrapper[4748]: I0216 15:14:07.883901 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerName="dnsmasq-dns" containerID="cri-o://0c657f3cbcd4db7170a8d6bcc0f391a5d4d58edd72d4401850feded56998a6f0" gracePeriod=10 Feb 16 15:14:08 crc kubenswrapper[4748]: I0216 15:14:08.921555 4748 generic.go:334] "Generic (PLEG): container finished" podID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerID="0c657f3cbcd4db7170a8d6bcc0f391a5d4d58edd72d4401850feded56998a6f0" exitCode=0 Feb 16 15:14:08 crc kubenswrapper[4748]: I0216 15:14:08.921648 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" event={"ID":"f13e2e6e-7e41-4da9-868f-94d41efe273e","Type":"ContainerDied","Data":"0c657f3cbcd4db7170a8d6bcc0f391a5d4d58edd72d4401850feded56998a6f0"} Feb 16 15:14:08 crc 
kubenswrapper[4748]: I0216 15:14:08.942486 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.94246414 podStartE2EDuration="8.94246414s" podCreationTimestamp="2026-02-16 15:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:08.93878906 +0000 UTC m=+1274.630458099" watchObservedRunningTime="2026-02-16 15:14:08.94246414 +0000 UTC m=+1274.634133179" Feb 16 15:14:10 crc kubenswrapper[4748]: I0216 15:14:10.432137 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused" Feb 16 15:14:10 crc kubenswrapper[4748]: I0216 15:14:10.889458 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7tc2j" Feb 16 15:14:10 crc kubenswrapper[4748]: I0216 15:14:10.928544 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-msqwg" Feb 16 15:14:10 crc kubenswrapper[4748]: I0216 15:14:10.989297 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7tc2j"
Feb 16 15:14:10 crc kubenswrapper[4748]: I0216 15:14:10.989985 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7tc2j" event={"ID":"011c3199-3e59-4794-ab44-de1abe4675a0","Type":"ContainerDied","Data":"b8a709251d8921937ada091214ec685b1d4f1ecc5c0ef625514b5e9b9655a05e"}
Feb 16 15:14:10 crc kubenswrapper[4748]: I0216 15:14:10.990037 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8a709251d8921937ada091214ec685b1d4f1ecc5c0ef625514b5e9b9655a05e"
Feb 16 15:14:10 crc kubenswrapper[4748]: I0216 15:14:10.991749 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s"
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.003424 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-msqwg"
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.018659 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-config-data\") pod \"7be12071-c948-4de3-8d3e-d21df02dfa91\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.018741 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-db-sync-config-data\") pod \"011c3199-3e59-4794-ab44-de1abe4675a0\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.018770 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlmfg\" (UniqueName: \"kubernetes.io/projected/011c3199-3e59-4794-ab44-de1abe4675a0-kube-api-access-dlmfg\") pod \"011c3199-3e59-4794-ab44-de1abe4675a0\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.018834 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-combined-ca-bundle\") pod \"7be12071-c948-4de3-8d3e-d21df02dfa91\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.018896 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-combined-ca-bundle\") pod \"011c3199-3e59-4794-ab44-de1abe4675a0\" (UID: \"011c3199-3e59-4794-ab44-de1abe4675a0\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.018964 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-credential-keys\") pod \"7be12071-c948-4de3-8d3e-d21df02dfa91\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.019004 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-scripts\") pod \"7be12071-c948-4de3-8d3e-d21df02dfa91\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.019065 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s2lf\" (UniqueName: \"kubernetes.io/projected/7be12071-c948-4de3-8d3e-d21df02dfa91-kube-api-access-7s2lf\") pod \"7be12071-c948-4de3-8d3e-d21df02dfa91\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.019104 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-fernet-keys\") pod \"7be12071-c948-4de3-8d3e-d21df02dfa91\" (UID: \"7be12071-c948-4de3-8d3e-d21df02dfa91\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.043555 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-msqwg" event={"ID":"7be12071-c948-4de3-8d3e-d21df02dfa91","Type":"ContainerDied","Data":"2a79122f3ed8c71786870ef43e63e5973c4ffe57d3f551dd57d773f0792a6f01"}
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.044269 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a79122f3ed8c71786870ef43e63e5973c4ffe57d3f551dd57d773f0792a6f01"
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.045306 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7be12071-c948-4de3-8d3e-d21df02dfa91" (UID: "7be12071-c948-4de3-8d3e-d21df02dfa91"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.045632 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-scripts" (OuterVolumeSpecName: "scripts") pod "7be12071-c948-4de3-8d3e-d21df02dfa91" (UID: "7be12071-c948-4de3-8d3e-d21df02dfa91"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.045811 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be12071-c948-4de3-8d3e-d21df02dfa91-kube-api-access-7s2lf" (OuterVolumeSpecName: "kube-api-access-7s2lf") pod "7be12071-c948-4de3-8d3e-d21df02dfa91" (UID: "7be12071-c948-4de3-8d3e-d21df02dfa91"). InnerVolumeSpecName "kube-api-access-7s2lf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.045842 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "011c3199-3e59-4794-ab44-de1abe4675a0" (UID: "011c3199-3e59-4794-ab44-de1abe4675a0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.045929 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7be12071-c948-4de3-8d3e-d21df02dfa91" (UID: "7be12071-c948-4de3-8d3e-d21df02dfa91"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.046190 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/011c3199-3e59-4794-ab44-de1abe4675a0-kube-api-access-dlmfg" (OuterVolumeSpecName: "kube-api-access-dlmfg") pod "011c3199-3e59-4794-ab44-de1abe4675a0" (UID: "011c3199-3e59-4794-ab44-de1abe4675a0"). InnerVolumeSpecName "kube-api-access-dlmfg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.065694 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7be12071-c948-4de3-8d3e-d21df02dfa91" (UID: "7be12071-c948-4de3-8d3e-d21df02dfa91"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.072643 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-config-data" (OuterVolumeSpecName: "config-data") pod "7be12071-c948-4de3-8d3e-d21df02dfa91" (UID: "7be12071-c948-4de3-8d3e-d21df02dfa91"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.079813 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "011c3199-3e59-4794-ab44-de1abe4675a0" (UID: "011c3199-3e59-4794-ab44-de1abe4675a0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.121699 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-config\") pod \"f13e2e6e-7e41-4da9-868f-94d41efe273e\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.121909 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmbfx\" (UniqueName: \"kubernetes.io/projected/f13e2e6e-7e41-4da9-868f-94d41efe273e-kube-api-access-cmbfx\") pod \"f13e2e6e-7e41-4da9-868f-94d41efe273e\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.121956 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-swift-storage-0\") pod \"f13e2e6e-7e41-4da9-868f-94d41efe273e\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.122023 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-nb\") pod \"f13e2e6e-7e41-4da9-868f-94d41efe273e\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.122121 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-svc\") pod \"f13e2e6e-7e41-4da9-868f-94d41efe273e\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.122434 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-sb\") pod \"f13e2e6e-7e41-4da9-868f-94d41efe273e\" (UID: \"f13e2e6e-7e41-4da9-868f-94d41efe273e\") "
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123196 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123216 4748 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-credential-keys\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123228 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123241 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s2lf\" (UniqueName: \"kubernetes.io/projected/7be12071-c948-4de3-8d3e-d21df02dfa91-kube-api-access-7s2lf\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123255 4748 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-fernet-keys\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123267 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123278 4748 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/011c3199-3e59-4794-ab44-de1abe4675a0-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123290 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlmfg\" (UniqueName: \"kubernetes.io/projected/011c3199-3e59-4794-ab44-de1abe4675a0-kube-api-access-dlmfg\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.123300 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be12071-c948-4de3-8d3e-d21df02dfa91-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.124965 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f13e2e6e-7e41-4da9-868f-94d41efe273e-kube-api-access-cmbfx" (OuterVolumeSpecName: "kube-api-access-cmbfx") pod "f13e2e6e-7e41-4da9-868f-94d41efe273e" (UID: "f13e2e6e-7e41-4da9-868f-94d41efe273e"). InnerVolumeSpecName "kube-api-access-cmbfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.154042 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55548d48fb-q5fcz"]
Feb 16 15:14:11 crc kubenswrapper[4748]: W0216 15:14:11.154140 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d6f88_2878_4893_a7de_e671f7e25ad9.slice/crio-da86eb5e43852667a6b5fbbd3779c52dc4236663ed27de533ecf87110f989cbd WatchSource:0}: Error finding container da86eb5e43852667a6b5fbbd3779c52dc4236663ed27de533ecf87110f989cbd: Status 404 returned error can't find the container with id da86eb5e43852667a6b5fbbd3779c52dc4236663ed27de533ecf87110f989cbd
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.179835 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f13e2e6e-7e41-4da9-868f-94d41efe273e" (UID: "f13e2e6e-7e41-4da9-868f-94d41efe273e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.193564 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f13e2e6e-7e41-4da9-868f-94d41efe273e" (UID: "f13e2e6e-7e41-4da9-868f-94d41efe273e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.199509 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f13e2e6e-7e41-4da9-868f-94d41efe273e" (UID: "f13e2e6e-7e41-4da9-868f-94d41efe273e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.208180 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f13e2e6e-7e41-4da9-868f-94d41efe273e" (UID: "f13e2e6e-7e41-4da9-868f-94d41efe273e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.216453 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-config" (OuterVolumeSpecName: "config") pod "f13e2e6e-7e41-4da9-868f-94d41efe273e" (UID: "f13e2e6e-7e41-4da9-868f-94d41efe273e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.224858 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.224899 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmbfx\" (UniqueName: \"kubernetes.io/projected/f13e2e6e-7e41-4da9-868f-94d41efe273e-kube-api-access-cmbfx\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.224911 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.224920 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.224928 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.224939 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f13e2e6e-7e41-4da9-868f-94d41efe273e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.597152 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.597499 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.645010 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 15:14:11 crc kubenswrapper[4748]: I0216 15:14:11.653110 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.016700 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e869859-8d55-4b07-90cf-6936061845a0","Type":"ContainerStarted","Data":"3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92"}
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.024925 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55548d48fb-q5fcz" event={"ID":"2d1d6f88-2878-4893-a7de-e671f7e25ad9","Type":"ContainerStarted","Data":"e770cb4557633c64132dede327871bade81ff4f743e5a53a45f71a9761ae0b7a"}
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.024987 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55548d48fb-q5fcz" event={"ID":"2d1d6f88-2878-4893-a7de-e671f7e25ad9","Type":"ContainerStarted","Data":"9c23b9e8b7678b473e739587720b0501416475bd073be069dc36caaa9725765f"}
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.025001 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55548d48fb-q5fcz" event={"ID":"2d1d6f88-2878-4893-a7de-e671f7e25ad9","Type":"ContainerStarted","Data":"da86eb5e43852667a6b5fbbd3779c52dc4236663ed27de533ecf87110f989cbd"}
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.025060 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55548d48fb-q5fcz"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.025189 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-55548d48fb-q5fcz"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.039618 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s" event={"ID":"f13e2e6e-7e41-4da9-868f-94d41efe273e","Type":"ContainerDied","Data":"7fdf2c7a39f825df6235b0fdd71e0ebc34346e4dd464f553c1a133e3940deccb"}
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.039680 4748 scope.go:117] "RemoveContainer" containerID="0c657f3cbcd4db7170a8d6bcc0f391a5d4d58edd72d4401850feded56998a6f0"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.039901 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-mll8s"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.041509 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=11.041492409 podStartE2EDuration="11.041492409s" podCreationTimestamp="2026-02-16 15:14:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:12.035454341 +0000 UTC m=+1277.727123400" watchObservedRunningTime="2026-02-16 15:14:12.041492409 +0000 UTC m=+1277.733161448"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.061936 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerStarted","Data":"407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34"}
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.070614 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5n8n8" event={"ID":"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71","Type":"ContainerStarted","Data":"0e833dcb4f3a0ebaaab025f6b0c1ad587e826c822f10215771faf1f3b4550cc6"}
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.071878 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.071916 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.079230 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-55548d48fb-q5fcz" podStartSLOduration=9.079203783 podStartE2EDuration="9.079203783s" podCreationTimestamp="2026-02-16 15:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:12.061388017 +0000 UTC m=+1277.753057066" watchObservedRunningTime="2026-02-16 15:14:12.079203783 +0000 UTC m=+1277.770872822"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.086420 4748 scope.go:117] "RemoveContainer" containerID="cfffec18996a9d562ea98cc29208cd14603b33febe02c823a876fd5940685ed7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.109524 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mll8s"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.144569 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-mll8s"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.146802 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-5n8n8" podStartSLOduration=5.200812055 podStartE2EDuration="44.146774989s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="2026-02-16 15:13:31.69682636 +0000 UTC m=+1237.388495399" lastFinishedPulling="2026-02-16 15:14:10.642789294 +0000 UTC m=+1276.334458333" observedRunningTime="2026-02-16 15:14:12.106333808 +0000 UTC m=+1277.798002847" watchObservedRunningTime="2026-02-16 15:14:12.146774989 +0000 UTC m=+1277.838444028"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.199209 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69b5b5cc64-hzmw7"]
Feb 16 15:14:12 crc kubenswrapper[4748]: E0216 15:14:12.200095 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="011c3199-3e59-4794-ab44-de1abe4675a0" containerName="barbican-db-sync"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.200116 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="011c3199-3e59-4794-ab44-de1abe4675a0" containerName="barbican-db-sync"
Feb 16 15:14:12 crc kubenswrapper[4748]: E0216 15:14:12.200146 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerName="dnsmasq-dns"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.200152 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerName="dnsmasq-dns"
Feb 16 15:14:12 crc kubenswrapper[4748]: E0216 15:14:12.200161 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be12071-c948-4de3-8d3e-d21df02dfa91" containerName="keystone-bootstrap"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.200167 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be12071-c948-4de3-8d3e-d21df02dfa91" containerName="keystone-bootstrap"
Feb 16 15:14:12 crc kubenswrapper[4748]: E0216 15:14:12.200181 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerName="init"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.200186 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerName="init"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.200404 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" containerName="dnsmasq-dns"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.200431 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="011c3199-3e59-4794-ab44-de1abe4675a0" containerName="barbican-db-sync"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.200444 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be12071-c948-4de3-8d3e-d21df02dfa91" containerName="keystone-bootstrap"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.201323 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.207090 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69b5b5cc64-hzmw7"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.207436 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cd9jb"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.207735 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.207909 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.207916 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.208266 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.209035 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250130 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-config-data\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250197 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-fernet-keys\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250236 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqfnm\" (UniqueName: \"kubernetes.io/projected/8bbee52e-c08f-417f-a7e9-d7c055c695e7-kube-api-access-qqfnm\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250280 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-public-tls-certs\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250298 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-combined-ca-bundle\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250321 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-scripts\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250381 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-credential-keys\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.250423 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-internal-tls-certs\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.329857 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6865b95f65-8tlsw"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.334151 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6865b95f65-8tlsw"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.336830 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-z5647"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.337087 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.338105 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.352688 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6865b95f65-8tlsw"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353631 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-config-data\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353676 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-fernet-keys\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353814 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqfnm\" (UniqueName: \"kubernetes.io/projected/8bbee52e-c08f-417f-a7e9-d7c055c695e7-kube-api-access-qqfnm\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353858 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-public-tls-certs\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353875 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-combined-ca-bundle\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353896 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-scripts\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353946 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-credential-keys\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.353992 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-internal-tls-certs\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.363869 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.365809 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.365940 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-internal-tls-certs\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.368424 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-scripts\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.368643 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.368664 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.368909 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.371881 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-fernet-keys\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.374264 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-public-tls-certs\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.382582 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-config-data\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.400212 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.400939 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-credential-keys\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.408370 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bbee52e-c08f-417f-a7e9-d7c055c695e7-combined-ca-bundle\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.425670 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-p5sw7"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.427351 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.427415 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqfnm\" (UniqueName: \"kubernetes.io/projected/8bbee52e-c08f-417f-a7e9-d7c055c695e7-kube-api-access-qqfnm\") pod \"keystone-69b5b5cc64-hzmw7\" (UID: \"8bbee52e-c08f-417f-a7e9-d7c055c695e7\") " pod="openstack/keystone-69b5b5cc64-hzmw7"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.457970 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-p5sw7"]
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.502898 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data-custom\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"
Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503001 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451cbfe0-adb6-42ed-bab0-61176a915b0d-logs\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") "
pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503057 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data-custom\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503138 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be9599-5fcb-42cf-b799-e7264965a04c-logs\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503194 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503253 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503360 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7mgg\" (UniqueName: 
\"kubernetes.io/projected/451cbfe0-adb6-42ed-bab0-61176a915b0d-kube-api-access-x7mgg\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503492 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-combined-ca-bundle\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503539 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-combined-ca-bundle\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.503596 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgqv2\" (UniqueName: \"kubernetes.io/projected/e5be9599-5fcb-42cf-b799-e7264965a04c-kube-api-access-dgqv2\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.551190 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69b5b5cc64-hzmw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.615316 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.615756 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-combined-ca-bundle\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.616122 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-combined-ca-bundle\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.616965 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgqv2\" (UniqueName: \"kubernetes.io/projected/e5be9599-5fcb-42cf-b799-e7264965a04c-kube-api-access-dgqv2\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.617842 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data-custom\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.617951 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618028 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451cbfe0-adb6-42ed-bab0-61176a915b0d-logs\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618096 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data-custom\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618171 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be9599-5fcb-42cf-b799-e7264965a04c-logs\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618193 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618234 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618304 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618377 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618440 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f69hz\" (UniqueName: \"kubernetes.io/projected/6ea74e1b-d321-4b71-9e84-55af4ab109af-kube-api-access-f69hz\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618521 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7mgg\" (UniqueName: \"kubernetes.io/projected/451cbfe0-adb6-42ed-bab0-61176a915b0d-kube-api-access-x7mgg\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: 
\"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618578 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-svc\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.618631 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-config\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.645396 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be9599-5fcb-42cf-b799-e7264965a04c-logs\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.649282 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451cbfe0-adb6-42ed-bab0-61176a915b0d-logs\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.651005 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.658414 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgqv2\" (UniqueName: 
\"kubernetes.io/projected/e5be9599-5fcb-42cf-b799-e7264965a04c-kube-api-access-dgqv2\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.661500 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.662028 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-combined-ca-bundle\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.662343 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data-custom\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.662639 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data-custom\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.663432 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.665321 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-combined-ca-bundle\") pod \"barbican-keystone-listener-6d986fcbd8-tx2x5\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.681392 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7mgg\" (UniqueName: \"kubernetes.io/projected/451cbfe0-adb6-42ed-bab0-61176a915b0d-kube-api-access-x7mgg\") pod \"barbican-worker-6865b95f65-8tlsw\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.720071 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-574599c959-m6zm8"] Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.722143 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.723578 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.723809 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.723908 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f69hz\" (UniqueName: \"kubernetes.io/projected/6ea74e1b-d321-4b71-9e84-55af4ab109af-kube-api-access-f69hz\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.724005 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-svc\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.724079 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-config\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " 
pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.724160 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.724601 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.725058 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.741866 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-574599c959-m6zm8"] Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.747870 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.750442 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-config\") 
pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.753636 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-86d9658dfd-jm4bc"] Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.758705 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-svc\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.763565 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.767643 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f69hz\" (UniqueName: \"kubernetes.io/projected/6ea74e1b-d321-4b71-9e84-55af4ab109af-kube-api-access-f69hz\") pod \"dnsmasq-dns-85ff748b95-p5sw7\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.802616 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86d9658dfd-jm4bc"] Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.826239 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-config-data\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.826274 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-combined-ca-bundle\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.826455 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-config-data-custom\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.826695 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ee0b09b-784e-4ba5-bfb3-4067ec822943-logs\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.826881 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6vq9\" (UniqueName: \"kubernetes.io/projected/7ee0b09b-784e-4ba5-bfb3-4067ec822943-kube-api-access-p6vq9\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.827522 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c97479c48-gl8bw"] Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.829237 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.833691 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.844433 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c97479c48-gl8bw"] Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.865294 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.910347 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937016 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6vq9\" (UniqueName: \"kubernetes.io/projected/7ee0b09b-784e-4ba5-bfb3-4067ec822943-kube-api-access-p6vq9\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937067 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e72587e-7f6f-433e-a493-41d33cb99182-logs\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937104 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-config-data-custom\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " 
pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937138 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data-custom\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937300 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-logs\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937335 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937390 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-config-data\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937441 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-combined-ca-bundle\") pod \"barbican-worker-574599c959-m6zm8\" (UID: 
\"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937568 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-config-data\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937643 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-config-data-custom\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937689 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-combined-ca-bundle\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937798 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf686\" (UniqueName: \"kubernetes.io/projected/1e72587e-7f6f-433e-a493-41d33cb99182-kube-api-access-lf686\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937909 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jkc6b\" (UniqueName: \"kubernetes.io/projected/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-kube-api-access-jkc6b\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.937998 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ee0b09b-784e-4ba5-bfb3-4067ec822943-logs\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.938035 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-combined-ca-bundle\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.942188 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ee0b09b-784e-4ba5-bfb3-4067ec822943-logs\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.942343 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-config-data-custom\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.945281 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-config-data\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.952620 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ee0b09b-784e-4ba5-bfb3-4067ec822943-combined-ca-bundle\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.953062 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6vq9\" (UniqueName: \"kubernetes.io/projected/7ee0b09b-784e-4ba5-bfb3-4067ec822943-kube-api-access-p6vq9\") pod \"barbican-worker-574599c959-m6zm8\" (UID: \"7ee0b09b-784e-4ba5-bfb3-4067ec822943\") " pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:12 crc kubenswrapper[4748]: I0216 15:14:12.953108 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.026689 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f13e2e6e-7e41-4da9-868f-94d41efe273e" path="/var/lib/kubelet/pods/f13e2e6e-7e41-4da9-868f-94d41efe273e/volumes" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.040738 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-config-data-custom\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.040813 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data-custom\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.040932 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-logs\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.040971 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.041100 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-config-data\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.041137 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-combined-ca-bundle\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.041189 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf686\" (UniqueName: \"kubernetes.io/projected/1e72587e-7f6f-433e-a493-41d33cb99182-kube-api-access-lf686\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.041658 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkc6b\" (UniqueName: \"kubernetes.io/projected/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-kube-api-access-jkc6b\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.041748 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-combined-ca-bundle\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc 
kubenswrapper[4748]: I0216 15:14:13.041786 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e72587e-7f6f-433e-a493-41d33cb99182-logs\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.042208 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e72587e-7f6f-433e-a493-41d33cb99182-logs\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.044082 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-logs\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.046465 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-config-data-custom\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.047234 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-config-data\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 
15:14:13.055925 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.061238 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-combined-ca-bundle\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.063381 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data-custom\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.069002 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e72587e-7f6f-433e-a493-41d33cb99182-combined-ca-bundle\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.069305 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf686\" (UniqueName: \"kubernetes.io/projected/1e72587e-7f6f-433e-a493-41d33cb99182-kube-api-access-lf686\") pod \"barbican-keystone-listener-86d9658dfd-jm4bc\" (UID: \"1e72587e-7f6f-433e-a493-41d33cb99182\") " pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 
15:14:13.080858 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkc6b\" (UniqueName: \"kubernetes.io/projected/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-kube-api-access-jkc6b\") pod \"barbican-api-6c97479c48-gl8bw\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.127862 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-574599c959-m6zm8" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.143396 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.151935 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.159739 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.159777 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.190307 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69b5b5cc64-hzmw7"] Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.383601 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"] Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.603156 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-p5sw7"] Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.765376 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6865b95f65-8tlsw"] Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 
15:14:13.916792 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c97479c48-gl8bw"] Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.924394 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-86d9658dfd-jm4bc"] Feb 16 15:14:13 crc kubenswrapper[4748]: I0216 15:14:13.938673 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-574599c959-m6zm8"] Feb 16 15:14:14 crc kubenswrapper[4748]: E0216 15:14:14.119638 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:14:14 crc kubenswrapper[4748]: E0216 15:14:14.120043 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:14:14 crc kubenswrapper[4748]: E0216 15:14:14.120230 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:14:14 crc kubenswrapper[4748]: E0216 15:14:14.121664 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.231876 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6865b95f65-8tlsw" event={"ID":"451cbfe0-adb6-42ed-bab0-61176a915b0d","Type":"ContainerStarted","Data":"41c932aab153b4e52b2c8bb4ad0dd047df3440631ee9ed6e0d924f2ebd391e4f"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.233526 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-574599c959-m6zm8" event={"ID":"7ee0b09b-784e-4ba5-bfb3-4067ec822943","Type":"ContainerStarted","Data":"345f9bdcb7539865df4986050e06e31e220e784165f7bf3e95698c24ed7921f3"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.234710 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69b5b5cc64-hzmw7" event={"ID":"8bbee52e-c08f-417f-a7e9-d7c055c695e7","Type":"ContainerStarted","Data":"ad79091d55f5b08469dc43636c8bbbd781c92ee4209b59c7b30e6cbae40e79a4"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.234737 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69b5b5cc64-hzmw7" event={"ID":"8bbee52e-c08f-417f-a7e9-d7c055c695e7","Type":"ContainerStarted","Data":"e2f0f6f0285908f991a603584ec1f2515098381fe114076002c57884126abd41"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.234981 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69b5b5cc64-hzmw7" Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.246870 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" event={"ID":"e5be9599-5fcb-42cf-b799-e7264965a04c","Type":"ContainerStarted","Data":"5123af9ee88b0e3047b705638fb5a833e1716136ec88afa4991b864e6be2ba61"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.258668 4748 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/keystone-69b5b5cc64-hzmw7" podStartSLOduration=2.258648704 podStartE2EDuration="2.258648704s" podCreationTimestamp="2026-02-16 15:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:14.253724343 +0000 UTC m=+1279.945393372" watchObservedRunningTime="2026-02-16 15:14:14.258648704 +0000 UTC m=+1279.950317743" Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.268080 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" event={"ID":"1e72587e-7f6f-433e-a493-41d33cb99182","Type":"ContainerStarted","Data":"b5362a97f517c749ed60d91572121181a03644f2ee2cf92b938a3ddde94ba43a"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.274169 4748 generic.go:334] "Generic (PLEG): container finished" podID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerID="8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac" exitCode=0 Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.274239 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" event={"ID":"6ea74e1b-d321-4b71-9e84-55af4ab109af","Type":"ContainerDied","Data":"8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.274269 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" event={"ID":"6ea74e1b-d321-4b71-9e84-55af4ab109af","Type":"ContainerStarted","Data":"c807894d1d9a42eb024f144614e204e2404031d320fb6378b37f8882f5346358"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.285706 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c97479c48-gl8bw" event={"ID":"0c0f602a-8a4c-4ad7-bac0-fc36623452d4","Type":"ContainerStarted","Data":"6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d"} Feb 16 
15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.285849 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c97479c48-gl8bw" event={"ID":"0c0f602a-8a4c-4ad7-bac0-fc36623452d4","Type":"ContainerStarted","Data":"5633e87e85f49c753fb73f65c9b9839b5af1874489eccff2f043d870f692b19b"} Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.286140 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:14 crc kubenswrapper[4748]: I0216 15:14:14.286169 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.209742 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.329863 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c97479c48-gl8bw" event={"ID":"0c0f602a-8a4c-4ad7-bac0-fc36623452d4","Type":"ContainerStarted","Data":"0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a"} Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.329914 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.333595 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" event={"ID":"6ea74e1b-d321-4b71-9e84-55af4ab109af","Type":"ContainerStarted","Data":"e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34"} Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.333716 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.333752 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.334041 4748 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.375997 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c97479c48-gl8bw" podStartSLOduration=3.375975426 podStartE2EDuration="3.375975426s" podCreationTimestamp="2026-02-16 15:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:15.372506171 +0000 UTC m=+1281.064175210" watchObservedRunningTime="2026-02-16 15:14:15.375975426 +0000 UTC m=+1281.067644465" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.421724 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" podStartSLOduration=3.421703696 podStartE2EDuration="3.421703696s" podCreationTimestamp="2026-02-16 15:14:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:15.418263812 +0000 UTC m=+1281.109932841" watchObservedRunningTime="2026-02-16 15:14:15.421703696 +0000 UTC m=+1281.113372735" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.793099 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-79ffc5c478-xcg56"] Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.795023 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.798751 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.799104 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.807619 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79ffc5c478-xcg56"] Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.850984 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/393346bc-972a-4a9f-847b-9bd0562093f7-logs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.851295 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-combined-ca-bundle\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.851333 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-config-data\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.851485 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-public-tls-certs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.852039 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-internal-tls-certs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.852079 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-config-data-custom\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.852122 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph88x\" (UniqueName: \"kubernetes.io/projected/393346bc-972a-4a9f-847b-9bd0562093f7-kube-api-access-ph88x\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.954125 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph88x\" (UniqueName: \"kubernetes.io/projected/393346bc-972a-4a9f-847b-9bd0562093f7-kube-api-access-ph88x\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.954251 4748 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/393346bc-972a-4a9f-847b-9bd0562093f7-logs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.954320 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-combined-ca-bundle\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.954337 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-config-data\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.954400 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-public-tls-certs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.954453 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-internal-tls-certs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.954481 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-config-data-custom\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.956693 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/393346bc-972a-4a9f-847b-9bd0562093f7-logs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.964027 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-public-tls-certs\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.965235 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-config-data\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.966464 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-combined-ca-bundle\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.975899 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-internal-tls-certs\") pod 
\"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.980940 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph88x\" (UniqueName: \"kubernetes.io/projected/393346bc-972a-4a9f-847b-9bd0562093f7-kube-api-access-ph88x\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:15 crc kubenswrapper[4748]: I0216 15:14:15.989985 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/393346bc-972a-4a9f-847b-9bd0562093f7-config-data-custom\") pod \"barbican-api-79ffc5c478-xcg56\" (UID: \"393346bc-972a-4a9f-847b-9bd0562093f7\") " pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:16 crc kubenswrapper[4748]: I0216 15:14:16.046841 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:14:16 crc kubenswrapper[4748]: I0216 15:14:16.104547 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:14:16 crc kubenswrapper[4748]: I0216 15:14:16.151819 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:16 crc kubenswrapper[4748]: I0216 15:14:16.345318 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:17 crc kubenswrapper[4748]: I0216 15:14:17.301677 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-79ffc5c478-xcg56"] Feb 16 15:14:17 crc kubenswrapper[4748]: I0216 15:14:17.371592 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-574599c959-m6zm8" event={"ID":"7ee0b09b-784e-4ba5-bfb3-4067ec822943","Type":"ContainerStarted","Data":"95d371901cded865a5f65c99b9aff0b5bbc3d47b412fe471bde66b45cf7c9564"} Feb 16 15:14:17 crc kubenswrapper[4748]: I0216 15:14:17.383750 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79ffc5c478-xcg56" event={"ID":"393346bc-972a-4a9f-847b-9bd0562093f7","Type":"ContainerStarted","Data":"f4f12f6062ec349bb953311cb6403c84ee9d6be7de00eeae602b287e6b69f4cc"} Feb 16 15:14:17 crc kubenswrapper[4748]: I0216 15:14:17.387058 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" event={"ID":"e5be9599-5fcb-42cf-b799-e7264965a04c","Type":"ContainerStarted","Data":"8544941b40466b87475c7eed3d5428fca4ac962c39086a29931976b6be782a25"} Feb 16 15:14:17 crc kubenswrapper[4748]: I0216 15:14:17.393634 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" event={"ID":"1e72587e-7f6f-433e-a493-41d33cb99182","Type":"ContainerStarted","Data":"acfb70004110adfa704bc3e703dd2b7904bd94f13ec952939e9758b2404042cb"} Feb 16 15:14:17 crc kubenswrapper[4748]: I0216 15:14:17.401812 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6865b95f65-8tlsw" 
event={"ID":"451cbfe0-adb6-42ed-bab0-61176a915b0d","Type":"ContainerStarted","Data":"b768dc464111ba611d2fc17a4fa404b1bd59c8ce1709ee63e123e57e6ace4786"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.435581 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6865b95f65-8tlsw" event={"ID":"451cbfe0-adb6-42ed-bab0-61176a915b0d","Type":"ContainerStarted","Data":"32454f86f55fabd7224954a4d406dc8725e57c2d19e6859e33ae5a807541a6b5"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.437913 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" event={"ID":"e5be9599-5fcb-42cf-b799-e7264965a04c","Type":"ContainerStarted","Data":"4043295356c734a308085f1460cd6a261ca9e89204e7ed1e0b833d7014a19599"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.448403 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" event={"ID":"1e72587e-7f6f-433e-a493-41d33cb99182","Type":"ContainerStarted","Data":"6d70ad0cbe434b89fd3a378e1badb0b70862d19c50693100a5b33e89c794485c"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.454786 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6865b95f65-8tlsw" podStartSLOduration=3.418024786 podStartE2EDuration="6.454766789s" podCreationTimestamp="2026-02-16 15:14:12 +0000 UTC" firstStartedPulling="2026-02-16 15:14:13.799354562 +0000 UTC m=+1279.491023591" lastFinishedPulling="2026-02-16 15:14:16.836096555 +0000 UTC m=+1282.527765594" observedRunningTime="2026-02-16 15:14:18.450746241 +0000 UTC m=+1284.142415280" watchObservedRunningTime="2026-02-16 15:14:18.454766789 +0000 UTC m=+1284.146435828" Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.462304 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-574599c959-m6zm8" 
event={"ID":"7ee0b09b-784e-4ba5-bfb3-4067ec822943","Type":"ContainerStarted","Data":"e120b47c7e0e17b4a5891f27386fca1c17b5f5bfae996763361ed741905448dc"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.468357 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79ffc5c478-xcg56" event={"ID":"393346bc-972a-4a9f-847b-9bd0562093f7","Type":"ContainerStarted","Data":"85833bd60748af4b19fcd7471e4de48a4f9a07d0b3306e7799a9994add75d63b"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.468405 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-79ffc5c478-xcg56" event={"ID":"393346bc-972a-4a9f-847b-9bd0562093f7","Type":"ContainerStarted","Data":"2a4c33654d23232a50a3f874ea7eedbce346c2ae479e316b5c10e73b6ef01223"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.468436 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.468457 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.473838 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-86d9658dfd-jm4bc" podStartSLOduration=3.627320584 podStartE2EDuration="6.473815876s" podCreationTimestamp="2026-02-16 15:14:12 +0000 UTC" firstStartedPulling="2026-02-16 15:14:13.975257922 +0000 UTC m=+1279.666926961" lastFinishedPulling="2026-02-16 15:14:16.821753224 +0000 UTC m=+1282.513422253" observedRunningTime="2026-02-16 15:14:18.47315624 +0000 UTC m=+1284.164825279" watchObservedRunningTime="2026-02-16 15:14:18.473815876 +0000 UTC m=+1284.165484935" Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.479570 4748 generic.go:334] "Generic (PLEG): container finished" podID="59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" 
containerID="0e833dcb4f3a0ebaaab025f6b0c1ad587e826c822f10215771faf1f3b4550cc6" exitCode=0 Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.479629 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5n8n8" event={"ID":"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71","Type":"ContainerDied","Data":"0e833dcb4f3a0ebaaab025f6b0c1ad587e826c822f10215771faf1f3b4550cc6"} Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.519471 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"] Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.542632 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" podStartSLOduration=3.110936943 podStartE2EDuration="6.542610591s" podCreationTimestamp="2026-02-16 15:14:12 +0000 UTC" firstStartedPulling="2026-02-16 15:14:13.409958153 +0000 UTC m=+1279.101627192" lastFinishedPulling="2026-02-16 15:14:16.841631801 +0000 UTC m=+1282.533300840" observedRunningTime="2026-02-16 15:14:18.525938673 +0000 UTC m=+1284.217607712" watchObservedRunningTime="2026-02-16 15:14:18.542610591 +0000 UTC m=+1284.234279630" Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.577755 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-79ffc5c478-xcg56" podStartSLOduration=3.577726021 podStartE2EDuration="3.577726021s" podCreationTimestamp="2026-02-16 15:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:18.569277454 +0000 UTC m=+1284.260946493" watchObservedRunningTime="2026-02-16 15:14:18.577726021 +0000 UTC m=+1284.269395060" Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.603435 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:14:18 crc 
kubenswrapper[4748]: I0216 15:14:18.624976 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-574599c959-m6zm8" podStartSLOduration=3.760090656 podStartE2EDuration="6.624948068s" podCreationTimestamp="2026-02-16 15:14:12 +0000 UTC" firstStartedPulling="2026-02-16 15:14:13.976518442 +0000 UTC m=+1279.668187481" lastFinishedPulling="2026-02-16 15:14:16.841375854 +0000 UTC m=+1282.533044893" observedRunningTime="2026-02-16 15:14:18.590425072 +0000 UTC m=+1284.282094121" watchObservedRunningTime="2026-02-16 15:14:18.624948068 +0000 UTC m=+1284.316617107" Feb 16 15:14:18 crc kubenswrapper[4748]: I0216 15:14:18.677362 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6865b95f65-8tlsw"] Feb 16 15:14:20 crc kubenswrapper[4748]: I0216 15:14:20.497564 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6865b95f65-8tlsw" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker-log" containerID="cri-o://b768dc464111ba611d2fc17a4fa404b1bd59c8ce1709ee63e123e57e6ace4786" gracePeriod=30 Feb 16 15:14:20 crc kubenswrapper[4748]: I0216 15:14:20.497650 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-6865b95f65-8tlsw" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker" containerID="cri-o://32454f86f55fabd7224954a4d406dc8725e57c2d19e6859e33ae5a807541a6b5" gracePeriod=30 Feb 16 15:14:20 crc kubenswrapper[4748]: I0216 15:14:20.497689 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener" containerID="cri-o://4043295356c734a308085f1460cd6a261ca9e89204e7ed1e0b833d7014a19599" gracePeriod=30 Feb 16 15:14:20 crc kubenswrapper[4748]: I0216 15:14:20.497686 4748 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener-log" containerID="cri-o://8544941b40466b87475c7eed3d5428fca4ac962c39086a29931976b6be782a25" gracePeriod=30 Feb 16 15:14:21 crc kubenswrapper[4748]: I0216 15:14:21.508111 4748 generic.go:334] "Generic (PLEG): container finished" podID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerID="8544941b40466b87475c7eed3d5428fca4ac962c39086a29931976b6be782a25" exitCode=143 Feb 16 15:14:21 crc kubenswrapper[4748]: I0216 15:14:21.508187 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" event={"ID":"e5be9599-5fcb-42cf-b799-e7264965a04c","Type":"ContainerDied","Data":"8544941b40466b87475c7eed3d5428fca4ac962c39086a29931976b6be782a25"} Feb 16 15:14:21 crc kubenswrapper[4748]: I0216 15:14:21.510739 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6865b95f65-8tlsw" event={"ID":"451cbfe0-adb6-42ed-bab0-61176a915b0d","Type":"ContainerDied","Data":"32454f86f55fabd7224954a4d406dc8725e57c2d19e6859e33ae5a807541a6b5"} Feb 16 15:14:21 crc kubenswrapper[4748]: I0216 15:14:21.510705 4748 generic.go:334] "Generic (PLEG): container finished" podID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerID="32454f86f55fabd7224954a4d406dc8725e57c2d19e6859e33ae5a807541a6b5" exitCode=0 Feb 16 15:14:21 crc kubenswrapper[4748]: I0216 15:14:21.510787 4748 generic.go:334] "Generic (PLEG): container finished" podID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerID="b768dc464111ba611d2fc17a4fa404b1bd59c8ce1709ee63e123e57e6ace4786" exitCode=143 Feb 16 15:14:21 crc kubenswrapper[4748]: I0216 15:14:21.510810 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6865b95f65-8tlsw" 
event={"ID":"451cbfe0-adb6-42ed-bab0-61176a915b0d","Type":"ContainerDied","Data":"b768dc464111ba611d2fc17a4fa404b1bd59c8ce1709ee63e123e57e6ace4786"} Feb 16 15:14:22 crc kubenswrapper[4748]: I0216 15:14:22.521881 4748 generic.go:334] "Generic (PLEG): container finished" podID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerID="4043295356c734a308085f1460cd6a261ca9e89204e7ed1e0b833d7014a19599" exitCode=0 Feb 16 15:14:22 crc kubenswrapper[4748]: I0216 15:14:22.522117 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" event={"ID":"e5be9599-5fcb-42cf-b799-e7264965a04c","Type":"ContainerDied","Data":"4043295356c734a308085f1460cd6a261ca9e89204e7ed1e0b833d7014a19599"} Feb 16 15:14:22 crc kubenswrapper[4748]: I0216 15:14:22.913092 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:22 crc kubenswrapper[4748]: I0216 15:14:22.990795 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hr4l4"] Feb 16 15:14:22 crc kubenswrapper[4748]: I0216 15:14:22.991232 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" podUID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerName="dnsmasq-dns" containerID="cri-o://d78350a2c9f6852a7063330bfff32d91286abf8f41aa97600cb58a3b35561a3d" gracePeriod=10 Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.410245 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5n8n8" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.421639 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.539005 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-5n8n8" event={"ID":"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71","Type":"ContainerDied","Data":"eafa8fc9f7393633f46a1fb4d1508c80f2df669459430b255a4ce1ad944ae1e5"} Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.539046 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eafa8fc9f7393633f46a1fb4d1508c80f2df669459430b255a4ce1ad944ae1e5" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.539145 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-5n8n8" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.543457 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6865b95f65-8tlsw" event={"ID":"451cbfe0-adb6-42ed-bab0-61176a915b0d","Type":"ContainerDied","Data":"41c932aab153b4e52b2c8bb4ad0dd047df3440631ee9ed6e0d924f2ebd391e4f"} Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.543498 4748 scope.go:117] "RemoveContainer" containerID="32454f86f55fabd7224954a4d406dc8725e57c2d19e6859e33ae5a807541a6b5" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.543613 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6865b95f65-8tlsw" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.548691 4748 generic.go:334] "Generic (PLEG): container finished" podID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerID="d78350a2c9f6852a7063330bfff32d91286abf8f41aa97600cb58a3b35561a3d" exitCode=0 Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.548734 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" event={"ID":"84c2e211-48cf-4dc8-8036-8dec6a355951","Type":"ContainerDied","Data":"d78350a2c9f6852a7063330bfff32d91286abf8f41aa97600cb58a3b35561a3d"} Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.551473 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557137 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data-custom\") pod \"451cbfe0-adb6-42ed-bab0-61176a915b0d\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557226 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data\") pod \"451cbfe0-adb6-42ed-bab0-61176a915b0d\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557314 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-scripts\") pod \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557338 4748 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-combined-ca-bundle\") pod \"451cbfe0-adb6-42ed-bab0-61176a915b0d\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557413 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-combined-ca-bundle\") pod \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557459 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-config-data\") pod \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557495 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-etc-machine-id\") pod \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557515 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-db-sync-config-data\") pod \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557533 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451cbfe0-adb6-42ed-bab0-61176a915b0d-logs\") pod \"451cbfe0-adb6-42ed-bab0-61176a915b0d\" (UID: 
\"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557557 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45dh\" (UniqueName: \"kubernetes.io/projected/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-kube-api-access-d45dh\") pod \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\" (UID: \"59c88a82-b5c7-43aa-b216-2fe7bcc6dd71\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.557622 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7mgg\" (UniqueName: \"kubernetes.io/projected/451cbfe0-adb6-42ed-bab0-61176a915b0d-kube-api-access-x7mgg\") pod \"451cbfe0-adb6-42ed-bab0-61176a915b0d\" (UID: \"451cbfe0-adb6-42ed-bab0-61176a915b0d\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.566505 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/451cbfe0-adb6-42ed-bab0-61176a915b0d-logs" (OuterVolumeSpecName: "logs") pod "451cbfe0-adb6-42ed-bab0-61176a915b0d" (UID: "451cbfe0-adb6-42ed-bab0-61176a915b0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.566649 4748 scope.go:117] "RemoveContainer" containerID="b768dc464111ba611d2fc17a4fa404b1bd59c8ce1709ee63e123e57e6ace4786" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.570545 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" (UID: "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.570701 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/451cbfe0-adb6-42ed-bab0-61176a915b0d-kube-api-access-x7mgg" (OuterVolumeSpecName: "kube-api-access-x7mgg") pod "451cbfe0-adb6-42ed-bab0-61176a915b0d" (UID: "451cbfe0-adb6-42ed-bab0-61176a915b0d"). InnerVolumeSpecName "kube-api-access-x7mgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.592295 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-kube-api-access-d45dh" (OuterVolumeSpecName: "kube-api-access-d45dh") pod "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" (UID: "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71"). InnerVolumeSpecName "kube-api-access-d45dh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.600564 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-scripts" (OuterVolumeSpecName: "scripts") pod "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" (UID: "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.601198 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "451cbfe0-adb6-42ed-bab0-61176a915b0d" (UID: "451cbfe0-adb6-42ed-bab0-61176a915b0d"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.607263 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" (UID: "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.610882 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" (UID: "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.612486 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "451cbfe0-adb6-42ed-bab0-61176a915b0d" (UID: "451cbfe0-adb6-42ed-bab0-61176a915b0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.652676 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data" (OuterVolumeSpecName: "config-data") pod "451cbfe0-adb6-42ed-bab0-61176a915b0d" (UID: "451cbfe0-adb6-42ed-bab0-61176a915b0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.653947 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.662293 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data-custom\") pod \"e5be9599-5fcb-42cf-b799-e7264965a04c\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.662347 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-combined-ca-bundle\") pod \"e5be9599-5fcb-42cf-b799-e7264965a04c\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.662369 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgqv2\" (UniqueName: \"kubernetes.io/projected/e5be9599-5fcb-42cf-b799-e7264965a04c-kube-api-access-dgqv2\") pod \"e5be9599-5fcb-42cf-b799-e7264965a04c\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.662439 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data\") pod \"e5be9599-5fcb-42cf-b799-e7264965a04c\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.662633 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be9599-5fcb-42cf-b799-e7264965a04c-logs\") pod \"e5be9599-5fcb-42cf-b799-e7264965a04c\" (UID: \"e5be9599-5fcb-42cf-b799-e7264965a04c\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.673924 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/e5be9599-5fcb-42cf-b799-e7264965a04c-kube-api-access-dgqv2" (OuterVolumeSpecName: "kube-api-access-dgqv2") pod "e5be9599-5fcb-42cf-b799-e7264965a04c" (UID: "e5be9599-5fcb-42cf-b799-e7264965a04c"). InnerVolumeSpecName "kube-api-access-dgqv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674003 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e5be9599-5fcb-42cf-b799-e7264965a04c-logs" (OuterVolumeSpecName: "logs") pod "e5be9599-5fcb-42cf-b799-e7264965a04c" (UID: "e5be9599-5fcb-42cf-b799-e7264965a04c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674780 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674794 4748 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674804 4748 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674813 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/451cbfe0-adb6-42ed-bab0-61176a915b0d-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674821 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d45dh\" (UniqueName: 
\"kubernetes.io/projected/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-kube-api-access-d45dh\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674832 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e5be9599-5fcb-42cf-b799-e7264965a04c-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674841 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7mgg\" (UniqueName: \"kubernetes.io/projected/451cbfe0-adb6-42ed-bab0-61176a915b0d-kube-api-access-x7mgg\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674850 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgqv2\" (UniqueName: \"kubernetes.io/projected/e5be9599-5fcb-42cf-b799-e7264965a04c-kube-api-access-dgqv2\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674858 4748 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674866 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674873 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.674881 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/451cbfe0-adb6-42ed-bab0-61176a915b0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 
15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.684654 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e5be9599-5fcb-42cf-b799-e7264965a04c" (UID: "e5be9599-5fcb-42cf-b799-e7264965a04c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.692241 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-config-data" (OuterVolumeSpecName: "config-data") pod "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" (UID: "59c88a82-b5c7-43aa-b216-2fe7bcc6dd71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.719916 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e5be9599-5fcb-42cf-b799-e7264965a04c" (UID: "e5be9599-5fcb-42cf-b799-e7264965a04c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.760857 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data" (OuterVolumeSpecName: "config-data") pod "e5be9599-5fcb-42cf-b799-e7264965a04c" (UID: "e5be9599-5fcb-42cf-b799-e7264965a04c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.777359 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-sb\") pod \"84c2e211-48cf-4dc8-8036-8dec6a355951\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.777413 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-nb\") pod \"84c2e211-48cf-4dc8-8036-8dec6a355951\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.777491 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-config\") pod \"84c2e211-48cf-4dc8-8036-8dec6a355951\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.777560 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-svc\") pod \"84c2e211-48cf-4dc8-8036-8dec6a355951\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.777632 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-swift-storage-0\") pod \"84c2e211-48cf-4dc8-8036-8dec6a355951\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.777666 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brd72\" (UniqueName: 
\"kubernetes.io/projected/84c2e211-48cf-4dc8-8036-8dec6a355951-kube-api-access-brd72\") pod \"84c2e211-48cf-4dc8-8036-8dec6a355951\" (UID: \"84c2e211-48cf-4dc8-8036-8dec6a355951\") " Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.778185 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.778201 4748 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.778211 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.778220 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e5be9599-5fcb-42cf-b799-e7264965a04c-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.795907 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84c2e211-48cf-4dc8-8036-8dec6a355951-kube-api-access-brd72" (OuterVolumeSpecName: "kube-api-access-brd72") pod "84c2e211-48cf-4dc8-8036-8dec6a355951" (UID: "84c2e211-48cf-4dc8-8036-8dec6a355951"). InnerVolumeSpecName "kube-api-access-brd72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.880358 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brd72\" (UniqueName: \"kubernetes.io/projected/84c2e211-48cf-4dc8-8036-8dec6a355951-kube-api-access-brd72\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.894526 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "84c2e211-48cf-4dc8-8036-8dec6a355951" (UID: "84c2e211-48cf-4dc8-8036-8dec6a355951"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.900515 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-config" (OuterVolumeSpecName: "config") pod "84c2e211-48cf-4dc8-8036-8dec6a355951" (UID: "84c2e211-48cf-4dc8-8036-8dec6a355951"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.904067 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "84c2e211-48cf-4dc8-8036-8dec6a355951" (UID: "84c2e211-48cf-4dc8-8036-8dec6a355951"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.906153 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "84c2e211-48cf-4dc8-8036-8dec6a355951" (UID: "84c2e211-48cf-4dc8-8036-8dec6a355951"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.912793 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-6865b95f65-8tlsw"] Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.919180 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "84c2e211-48cf-4dc8-8036-8dec6a355951" (UID: "84c2e211-48cf-4dc8-8036-8dec6a355951"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.921660 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-6865b95f65-8tlsw"] Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.982202 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.982255 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.982269 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 15:14:23.982282 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:23 crc kubenswrapper[4748]: I0216 
15:14:23.982295 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/84c2e211-48cf-4dc8-8036-8dec6a355951-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.562686 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerStarted","Data":"f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff"} Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.562915 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-central-agent" containerID="cri-o://543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828" gracePeriod=30 Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.562971 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="sg-core" containerID="cri-o://407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34" gracePeriod=30 Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.563241 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.563014 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-notification-agent" containerID="cri-o://0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd" gracePeriod=30 Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.562972 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="proxy-httpd" 
containerID="cri-o://f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff" gracePeriod=30 Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.568503 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" event={"ID":"84c2e211-48cf-4dc8-8036-8dec6a355951","Type":"ContainerDied","Data":"34b4ecae7f53aad88827b1030e4ee6096c9333a820a208da1a6cd1a05c991898"} Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.568564 4748 scope.go:117] "RemoveContainer" containerID="d78350a2c9f6852a7063330bfff32d91286abf8f41aa97600cb58a3b35561a3d" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.568737 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-hr4l4" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.575238 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.575464 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6d986fcbd8-tx2x5" event={"ID":"e5be9599-5fcb-42cf-b799-e7264965a04c","Type":"ContainerDied","Data":"5123af9ee88b0e3047b705638fb5a833e1716136ec88afa4991b864e6be2ba61"} Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.602113 4748 scope.go:117] "RemoveContainer" containerID="529f11fe66be21b6ad13047387e29888f27a83ddb000433c36e13f0dc298583f" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.611895 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.025271443 podStartE2EDuration="56.611880363s" podCreationTimestamp="2026-02-16 15:13:28 +0000 UTC" firstStartedPulling="2026-02-16 15:13:31.593299624 +0000 UTC m=+1237.284968663" lastFinishedPulling="2026-02-16 15:14:23.179908544 +0000 UTC m=+1288.871577583" observedRunningTime="2026-02-16 15:14:24.610425878 +0000 
UTC m=+1290.302094917" watchObservedRunningTime="2026-02-16 15:14:24.611880363 +0000 UTC m=+1290.303549402" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.626291 4748 scope.go:117] "RemoveContainer" containerID="4043295356c734a308085f1460cd6a261ca9e89204e7ed1e0b833d7014a19599" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.671782 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hr4l4"] Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.708946 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-hr4l4"] Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.723981 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"] Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.735868 4748 scope.go:117] "RemoveContainer" containerID="8544941b40466b87475c7eed3d5428fca4ac962c39086a29931976b6be782a25" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.860668 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-6d986fcbd8-tx2x5"] Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.926781 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:24 crc kubenswrapper[4748]: E0216 15:14:24.927250 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927268 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker" Feb 16 15:14:24 crc kubenswrapper[4748]: E0216 15:14:24.927278 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerName="dnsmasq-dns" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927285 4748 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerName="dnsmasq-dns" Feb 16 15:14:24 crc kubenswrapper[4748]: E0216 15:14:24.927295 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker-log" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927301 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker-log" Feb 16 15:14:24 crc kubenswrapper[4748]: E0216 15:14:24.927321 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerName="init" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927327 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerName="init" Feb 16 15:14:24 crc kubenswrapper[4748]: E0216 15:14:24.927336 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener-log" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927341 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener-log" Feb 16 15:14:24 crc kubenswrapper[4748]: E0216 15:14:24.927361 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" containerName="cinder-db-sync" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927366 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" containerName="cinder-db-sync" Feb 16 15:14:24 crc kubenswrapper[4748]: E0216 15:14:24.927378 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927383 4748 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927582 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" containerName="cinder-db-sync" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927595 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927606 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" containerName="barbican-worker-log" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927617 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="84c2e211-48cf-4dc8-8036-8dec6a355951" containerName="dnsmasq-dns" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927634 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.927646 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" containerName="barbican-keystone-listener-log" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.928688 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.934571 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.934793 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.934833 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-v2zdh" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.935465 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 16 15:14:24 crc kubenswrapper[4748]: I0216 15:14:24.959287 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.011916 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="451cbfe0-adb6-42ed-bab0-61176a915b0d" path="/var/lib/kubelet/pods/451cbfe0-adb6-42ed-bab0-61176a915b0d/volumes" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.012527 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84c2e211-48cf-4dc8-8036-8dec6a355951" path="/var/lib/kubelet/pods/84c2e211-48cf-4dc8-8036-8dec6a355951/volumes" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.013157 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5be9599-5fcb-42cf-b799-e7264965a04c" path="/var/lib/kubelet/pods/e5be9599-5fcb-42cf-b799-e7264965a04c/volumes" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.023128 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " 
pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.023191 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.023244 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.023312 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnzkx\" (UniqueName: \"kubernetes.io/projected/8c30cc99-bbea-493e-8117-b4e418ebd5ef-kube-api-access-bnzkx\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.023337 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c30cc99-bbea-493e-8117-b4e418ebd5ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.023402 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc 
kubenswrapper[4748]: I0216 15:14:25.031473 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-v94vm"] Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.033675 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.047114 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-v94vm"] Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126320 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126384 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-config\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126423 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126473 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8htsc\" (UniqueName: \"kubernetes.io/projected/ff626f18-cf62-4e6d-8659-89370cb65f7f-kube-api-access-8htsc\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: 
\"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126506 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnzkx\" (UniqueName: \"kubernetes.io/projected/8c30cc99-bbea-493e-8117-b4e418ebd5ef-kube-api-access-bnzkx\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126531 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c30cc99-bbea-493e-8117-b4e418ebd5ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126576 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126662 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126706 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " 
pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126812 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126837 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.126903 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.134178 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c30cc99-bbea-493e-8117-b4e418ebd5ef-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.152260 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.154143 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.154597 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-scripts\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.155262 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.162291 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.169385 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.171594 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.187042 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.209345 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnzkx\" (UniqueName: 
\"kubernetes.io/projected/8c30cc99-bbea-493e-8117-b4e418ebd5ef-kube-api-access-bnzkx\") pod \"cinder-scheduler-0\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.234949 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235011 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235057 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8htsc\" (UniqueName: \"kubernetes.io/projected/ff626f18-cf62-4e6d-8659-89370cb65f7f-kube-api-access-8htsc\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235087 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-logs\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235120 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235144 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235199 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235229 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235246 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-scripts\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235273 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data\") pod \"cinder-api-0\" (UID: 
\"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235322 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235357 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srsp8\" (UniqueName: \"kubernetes.io/projected/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-kube-api-access-srsp8\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.235394 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-config\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.236403 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-config\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.236946 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc 
kubenswrapper[4748]: I0216 15:14:25.238496 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.239059 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.239581 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.263076 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.284625 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8htsc\" (UniqueName: \"kubernetes.io/projected/ff626f18-cf62-4e6d-8659-89370cb65f7f-kube-api-access-8htsc\") pod \"dnsmasq-dns-5c9776ccc5-v94vm\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.336962 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337048 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337073 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-scripts\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337101 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337158 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-srsp8\" (UniqueName: \"kubernetes.io/projected/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-kube-api-access-srsp8\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337202 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337233 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-logs\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337590 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-logs\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.337633 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-etc-machine-id\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.344948 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc 
kubenswrapper[4748]: I0216 15:14:25.346345 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data-custom\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.347255 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-scripts\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.353644 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.377312 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-srsp8\" (UniqueName: \"kubernetes.io/projected/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-kube-api-access-srsp8\") pod \"cinder-api-0\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.383108 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.625358 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.759642 4748 generic.go:334] "Generic (PLEG): container finished" podID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerID="407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34" exitCode=2 Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.759672 4748 generic.go:334] "Generic (PLEG): container finished" podID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerID="543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828" exitCode=0 Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.759693 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerDied","Data":"407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34"} Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.759742 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerDied","Data":"543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828"} Feb 16 15:14:25 crc kubenswrapper[4748]: I0216 15:14:25.981559 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.050854 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.115528 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-v94vm"] Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.372459 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.435298 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 
15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.840156 4748 generic.go:334] "Generic (PLEG): container finished" podID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerID="f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff" exitCode=0 Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.840475 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerDied","Data":"f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff"} Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.841708 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"feec6f3c-c20e-4b41-bd9a-d3e32087acd6","Type":"ContainerStarted","Data":"2d2859b224ee594b270b7c4b66c87c038570aceb1a607e3b06551f6cfce39185"} Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.842732 4748 generic.go:334] "Generic (PLEG): container finished" podID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerID="e503f427cf2edd61b5dd8f9459330d5e971c281b3e2fe00c292b9136fbcf8fa6" exitCode=0 Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.842772 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" event={"ID":"ff626f18-cf62-4e6d-8659-89370cb65f7f","Type":"ContainerDied","Data":"e503f427cf2edd61b5dd8f9459330d5e971c281b3e2fe00c292b9136fbcf8fa6"} Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.842788 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" event={"ID":"ff626f18-cf62-4e6d-8659-89370cb65f7f","Type":"ContainerStarted","Data":"c383e3008826f3e3ef6bf5ded1adfb50dd0d2184ee86726b896788e6e69008c6"} Feb 16 15:14:26 crc kubenswrapper[4748]: I0216 15:14:26.845380 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"8c30cc99-bbea-493e-8117-b4e418ebd5ef","Type":"ContainerStarted","Data":"562af5973f05709c153075b0cc0032d9fd6056904e847046a7dc1cb63f04bae3"} Feb 16 15:14:27 crc kubenswrapper[4748]: I0216 15:14:27.858154 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" event={"ID":"ff626f18-cf62-4e6d-8659-89370cb65f7f","Type":"ContainerStarted","Data":"a3abffd4599c49844f84e2b5f01b2024c7775add7e218dae603c91ca42c667d8"} Feb 16 15:14:27 crc kubenswrapper[4748]: I0216 15:14:27.859607 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:27 crc kubenswrapper[4748]: I0216 15:14:27.864204 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"feec6f3c-c20e-4b41-bd9a-d3e32087acd6","Type":"ContainerStarted","Data":"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9"} Feb 16 15:14:27 crc kubenswrapper[4748]: I0216 15:14:27.883114 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" podStartSLOduration=3.8830981209999997 podStartE2EDuration="3.883098121s" podCreationTimestamp="2026-02-16 15:14:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:27.881579993 +0000 UTC m=+1293.573249042" watchObservedRunningTime="2026-02-16 15:14:27.883098121 +0000 UTC m=+1293.574767160" Feb 16 15:14:28 crc kubenswrapper[4748]: E0216 15:14:28.018950 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.040132 4748 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.133922 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-85b968f78-p2mst" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.462511 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b9d456665-xjt6m"] Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.463282 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b9d456665-xjt6m" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-api" containerID="cri-o://bed6d02f53dd73b01cb8e00702a6ecbf387f6806345e2ae8294b7b0fc3a8d9af" gracePeriod=30 Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.463896 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b9d456665-xjt6m" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-httpd" containerID="cri-o://4c5d1586b5c082cdfc2a2efc11e6c42eecd94344390310c446a7cf8f5c626ab7" gracePeriod=30 Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.493331 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7b9d456665-xjt6m" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9696/\": EOF" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.518232 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b9448587-thszr"] Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.520036 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.522151 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.533796 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-79ffc5c478-xcg56" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.537136 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b9448587-thszr"] Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.663398 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-config\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.663488 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-combined-ca-bundle\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.663579 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-internal-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.663644 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-httpd-config\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.663670 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-ovndb-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.663702 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2zt6\" (UniqueName: \"kubernetes.io/projected/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-kube-api-access-b2zt6\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.663739 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-public-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.667623 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c97479c48-gl8bw"] Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.676040 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c97479c48-gl8bw" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api-log" containerID="cri-o://6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d" gracePeriod=30 Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.676634 
4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6c97479c48-gl8bw" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api" containerID="cri-o://0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a" gracePeriod=30 Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.770880 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-config\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.770985 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-combined-ca-bundle\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.771112 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-internal-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.771179 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-httpd-config\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.771211 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-ovndb-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.771252 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2zt6\" (UniqueName: \"kubernetes.io/projected/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-kube-api-access-b2zt6\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.771280 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-public-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.778044 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-config\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.779471 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-public-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.782437 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-combined-ca-bundle\") pod \"neutron-6b9448587-thszr\" (UID: 
\"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.783307 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-httpd-config\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.794156 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-ovndb-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.797940 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-internal-tls-certs\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.800337 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2zt6\" (UniqueName: \"kubernetes.io/projected/4da3d24c-5be3-45a4-a282-bbbd33f0dad7-kube-api-access-b2zt6\") pod \"neutron-6b9448587-thszr\" (UID: \"4da3d24c-5be3-45a4-a282-bbbd33f0dad7\") " pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.864487 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.918942 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8c30cc99-bbea-493e-8117-b4e418ebd5ef","Type":"ContainerStarted","Data":"317e09e74bd0ba6c6b9fd3b12c57123c285b8cb03c9af430ba8ef035db9d2af8"} Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.935033 4748 generic.go:334] "Generic (PLEG): container finished" podID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerID="6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d" exitCode=143 Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.935113 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c97479c48-gl8bw" event={"ID":"0c0f602a-8a4c-4ad7-bac0-fc36623452d4","Type":"ContainerDied","Data":"6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d"} Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.985686 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api-log" containerID="cri-o://6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9" gracePeriod=30 Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.987007 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"feec6f3c-c20e-4b41-bd9a-d3e32087acd6","Type":"ContainerStarted","Data":"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a"} Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.987070 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api" containerID="cri-o://81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a" gracePeriod=30 Feb 16 15:14:28 crc kubenswrapper[4748]: I0216 15:14:28.987170 4748 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 15:14:29 crc kubenswrapper[4748]: I0216 15:14:29.050347 4748 generic.go:334] "Generic (PLEG): container finished" podID="3938329a-9d91-481e-9993-09917f2c7686" containerID="4c5d1586b5c082cdfc2a2efc11e6c42eecd94344390310c446a7cf8f5c626ab7" exitCode=0 Feb 16 15:14:29 crc kubenswrapper[4748]: I0216 15:14:29.068329 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9d456665-xjt6m" event={"ID":"3938329a-9d91-481e-9993-09917f2c7686","Type":"ContainerDied","Data":"4c5d1586b5c082cdfc2a2efc11e6c42eecd94344390310c446a7cf8f5c626ab7"} Feb 16 15:14:29 crc kubenswrapper[4748]: I0216 15:14:29.071361 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.07133758 podStartE2EDuration="4.07133758s" podCreationTimestamp="2026-02-16 15:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:29.045944648 +0000 UTC m=+1294.737613697" watchObservedRunningTime="2026-02-16 15:14:29.07133758 +0000 UTC m=+1294.763006619" Feb 16 15:14:29 crc kubenswrapper[4748]: I0216 15:14:29.644824 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b9448587-thszr"] Feb 16 15:14:29 crc kubenswrapper[4748]: I0216 15:14:29.873729 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:14:29 crc kubenswrapper[4748]: I0216 15:14:29.876327 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.049755 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-scripts\") pod \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.049815 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-config-data\") pod \"d88fff67-d90f-4c7d-bf0d-711c87006f68\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.049886 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-logs\") pod \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.049935 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-scripts\") pod \"d88fff67-d90f-4c7d-bf0d-711c87006f68\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050005 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-run-httpd\") pod \"d88fff67-d90f-4c7d-bf0d-711c87006f68\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050035 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-combined-ca-bundle\") pod \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050075 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-log-httpd\") pod \"d88fff67-d90f-4c7d-bf0d-711c87006f68\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050099 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srsp8\" (UniqueName: \"kubernetes.io/projected/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-kube-api-access-srsp8\") pod \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050126 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data-custom\") pod \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050158 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb42r\" (UniqueName: \"kubernetes.io/projected/d88fff67-d90f-4c7d-bf0d-711c87006f68-kube-api-access-vb42r\") pod \"d88fff67-d90f-4c7d-bf0d-711c87006f68\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050195 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data\") pod \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " Feb 16 15:14:30 crc 
kubenswrapper[4748]: I0216 15:14:30.050235 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-combined-ca-bundle\") pod \"d88fff67-d90f-4c7d-bf0d-711c87006f68\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050397 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-etc-machine-id\") pod \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\" (UID: \"feec6f3c-c20e-4b41-bd9a-d3e32087acd6\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.050491 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-sg-core-conf-yaml\") pod \"d88fff67-d90f-4c7d-bf0d-711c87006f68\" (UID: \"d88fff67-d90f-4c7d-bf0d-711c87006f68\") " Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.051796 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-logs" (OuterVolumeSpecName: "logs") pod "feec6f3c-c20e-4b41-bd9a-d3e32087acd6" (UID: "feec6f3c-c20e-4b41-bd9a-d3e32087acd6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.053783 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-scripts" (OuterVolumeSpecName: "scripts") pod "feec6f3c-c20e-4b41-bd9a-d3e32087acd6" (UID: "feec6f3c-c20e-4b41-bd9a-d3e32087acd6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.054164 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d88fff67-d90f-4c7d-bf0d-711c87006f68" (UID: "d88fff67-d90f-4c7d-bf0d-711c87006f68"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.054605 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "feec6f3c-c20e-4b41-bd9a-d3e32087acd6" (UID: "feec6f3c-c20e-4b41-bd9a-d3e32087acd6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.054625 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d88fff67-d90f-4c7d-bf0d-711c87006f68" (UID: "d88fff67-d90f-4c7d-bf0d-711c87006f68"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.068333 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d88fff67-d90f-4c7d-bf0d-711c87006f68-kube-api-access-vb42r" (OuterVolumeSpecName: "kube-api-access-vb42r") pod "d88fff67-d90f-4c7d-bf0d-711c87006f68" (UID: "d88fff67-d90f-4c7d-bf0d-711c87006f68"). InnerVolumeSpecName "kube-api-access-vb42r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.072533 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-kube-api-access-srsp8" (OuterVolumeSpecName: "kube-api-access-srsp8") pod "feec6f3c-c20e-4b41-bd9a-d3e32087acd6" (UID: "feec6f3c-c20e-4b41-bd9a-d3e32087acd6"). InnerVolumeSpecName "kube-api-access-srsp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.072929 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "feec6f3c-c20e-4b41-bd9a-d3e32087acd6" (UID: "feec6f3c-c20e-4b41-bd9a-d3e32087acd6"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.073220 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-scripts" (OuterVolumeSpecName: "scripts") pod "d88fff67-d90f-4c7d-bf0d-711c87006f68" (UID: "d88fff67-d90f-4c7d-bf0d-711c87006f68"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.078374 4748 generic.go:334] "Generic (PLEG): container finished" podID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerID="81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a" exitCode=0 Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.078408 4748 generic.go:334] "Generic (PLEG): container finished" podID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerID="6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9" exitCode=143 Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.078466 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"feec6f3c-c20e-4b41-bd9a-d3e32087acd6","Type":"ContainerDied","Data":"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.078508 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"feec6f3c-c20e-4b41-bd9a-d3e32087acd6","Type":"ContainerDied","Data":"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.078524 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"feec6f3c-c20e-4b41-bd9a-d3e32087acd6","Type":"ContainerDied","Data":"2d2859b224ee594b270b7c4b66c87c038570aceb1a607e3b06551f6cfce39185"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.078564 4748 scope.go:117] "RemoveContainer" containerID="81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.078626 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.081176 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b9448587-thszr" event={"ID":"4da3d24c-5be3-45a4-a282-bbbd33f0dad7","Type":"ContainerStarted","Data":"2be5bd777e9e16343983698330c7a90259278ac13dabcff6cd0e97ecbec097ad"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.081258 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b9448587-thszr" event={"ID":"4da3d24c-5be3-45a4-a282-bbbd33f0dad7","Type":"ContainerStarted","Data":"f8127f1fe80a441083675256120e3fdb846ac76cb70a7f1b08e0763a6c1de9e9"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.088837 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8c30cc99-bbea-493e-8117-b4e418ebd5ef","Type":"ContainerStarted","Data":"f237e9e5edb5297133f20c1f9eda5f2f3afed30ae514b51c4a8aea9221a738b8"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.096252 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7b9d456665-xjt6m" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.171:9696/\": dial tcp 10.217.0.171:9696: connect: connection refused" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.105364 4748 generic.go:334] "Generic (PLEG): container finished" podID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerID="0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd" exitCode=0 Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.105415 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerDied","Data":"0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.105447 4748 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"d88fff67-d90f-4c7d-bf0d-711c87006f68","Type":"ContainerDied","Data":"f3fcb121671fb80441b3919ccfd06f01ae600a189b7bb37963c0c4b842bd442f"} Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.105537 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.106956 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d88fff67-d90f-4c7d-bf0d-711c87006f68" (UID: "d88fff67-d90f-4c7d-bf0d-711c87006f68"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.116854 4748 scope.go:117] "RemoveContainer" containerID="6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.127063 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.147134967 podStartE2EDuration="6.127043933s" podCreationTimestamp="2026-02-16 15:14:24 +0000 UTC" firstStartedPulling="2026-02-16 15:14:26.055623892 +0000 UTC m=+1291.747292931" lastFinishedPulling="2026-02-16 15:14:27.035532858 +0000 UTC m=+1292.727201897" observedRunningTime="2026-02-16 15:14:30.117340135 +0000 UTC m=+1295.809009174" watchObservedRunningTime="2026-02-16 15:14:30.127043933 +0000 UTC m=+1295.818712962" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.129886 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feec6f3c-c20e-4b41-bd9a-d3e32087acd6" (UID: "feec6f3c-c20e-4b41-bd9a-d3e32087acd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.153993 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154030 4748 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154045 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154073 4748 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d88fff67-d90f-4c7d-bf0d-711c87006f68-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154083 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srsp8\" (UniqueName: \"kubernetes.io/projected/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-kube-api-access-srsp8\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154093 4748 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154101 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb42r\" (UniqueName: \"kubernetes.io/projected/d88fff67-d90f-4c7d-bf0d-711c87006f68-kube-api-access-vb42r\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154110 4748 
reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154119 4748 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154128 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.154155 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.165722 4748 scope.go:117] "RemoveContainer" containerID="81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.166271 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a\": container with ID starting with 81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a not found: ID does not exist" containerID="81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.166307 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a"} err="failed to get container status \"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a\": rpc error: code = NotFound desc = could not find 
container \"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a\": container with ID starting with 81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.166328 4748 scope.go:117] "RemoveContainer" containerID="6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.166512 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9\": container with ID starting with 6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9 not found: ID does not exist" containerID="6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.166587 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9"} err="failed to get container status \"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9\": rpc error: code = NotFound desc = could not find container \"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9\": container with ID starting with 6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9 not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.166604 4748 scope.go:117] "RemoveContainer" containerID="81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.166793 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a"} err="failed to get container status \"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a\": rpc error: code = NotFound desc = could 
not find container \"81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a\": container with ID starting with 81a41e2dc4e6cece54d9b722f10213573ce11df03096943c10307de94ab4f25a not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.166810 4748 scope.go:117] "RemoveContainer" containerID="6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.166988 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9"} err="failed to get container status \"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9\": rpc error: code = NotFound desc = could not find container \"6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9\": container with ID starting with 6d99dccd33398383ec8c51b88ca19eb5a85605a09b7b4001fb8edabf7b1fc9c9 not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.167009 4748 scope.go:117] "RemoveContainer" containerID="f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.221005 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data" (OuterVolumeSpecName: "config-data") pod "feec6f3c-c20e-4b41-bd9a-d3e32087acd6" (UID: "feec6f3c-c20e-4b41-bd9a-d3e32087acd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.247584 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-config-data" (OuterVolumeSpecName: "config-data") pod "d88fff67-d90f-4c7d-bf0d-711c87006f68" (UID: "d88fff67-d90f-4c7d-bf0d-711c87006f68"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.258192 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.258219 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feec6f3c-c20e-4b41-bd9a-d3e32087acd6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.263975 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.265937 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d88fff67-d90f-4c7d-bf0d-711c87006f68" (UID: "d88fff67-d90f-4c7d-bf0d-711c87006f68"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.335522 4748 scope.go:117] "RemoveContainer" containerID="407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.353486 4748 scope.go:117] "RemoveContainer" containerID="0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.359702 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d88fff67-d90f-4c7d-bf0d-711c87006f68-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.375002 4748 scope.go:117] "RemoveContainer" containerID="543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.393627 4748 scope.go:117] "RemoveContainer" containerID="f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.394057 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff\": container with ID starting with f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff not found: ID does not exist" containerID="f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.394096 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff"} err="failed to get container status \"f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff\": rpc error: code = NotFound desc = could not find container \"f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff\": container with ID starting 
with f2a40ff99918aef8ecbde2d8a4114240af4728c8f39a778ef25e2810abeffcff not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.394123 4748 scope.go:117] "RemoveContainer" containerID="407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.394329 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34\": container with ID starting with 407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34 not found: ID does not exist" containerID="407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.394358 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34"} err="failed to get container status \"407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34\": rpc error: code = NotFound desc = could not find container \"407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34\": container with ID starting with 407c3e021318974d63a29407698ba23650392fdb557de86d8fbd9bed97aa1f34 not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.394378 4748 scope.go:117] "RemoveContainer" containerID="0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.394590 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd\": container with ID starting with 0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd not found: ID does not exist" containerID="0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd" Feb 16 15:14:30 
crc kubenswrapper[4748]: I0216 15:14:30.394617 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd"} err="failed to get container status \"0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd\": rpc error: code = NotFound desc = could not find container \"0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd\": container with ID starting with 0bdb3bb177fd167055c60d15c0303177a962cb6aac0a8e736f94ce310f7b6fcd not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.394634 4748 scope.go:117] "RemoveContainer" containerID="543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.394888 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828\": container with ID starting with 543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828 not found: ID does not exist" containerID="543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.394913 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828"} err="failed to get container status \"543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828\": rpc error: code = NotFound desc = could not find container \"543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828\": container with ID starting with 543d36abf711ed3ca910ca7abdeed2e2e67d265484652beef82986f933258828 not found: ID does not exist" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.437355 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:30 crc 
kubenswrapper[4748]: I0216 15:14:30.446526 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.467781 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.494118 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.505895 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.506313 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-central-agent" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506328 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-central-agent" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.506337 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-notification-agent" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506344 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-notification-agent" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.506364 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506370 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.506380 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="sg-core" Feb 16 
15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506385 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="sg-core" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.506393 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api-log" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506398 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api-log" Feb 16 15:14:30 crc kubenswrapper[4748]: E0216 15:14:30.506420 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="proxy-httpd" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506426 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="proxy-httpd" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506589 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-central-agent" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506603 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api-log" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506613 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" containerName="cinder-api" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506624 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="sg-core" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506635 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="ceilometer-notification-agent" Feb 16 
15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.506642 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" containerName="proxy-httpd" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.507783 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.513641 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.513863 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.519359 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.521740 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.542624 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.545808 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.550662 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.551014 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.630796 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.666756 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-logs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.666835 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-config-data\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.666911 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq59n\" (UniqueName: \"kubernetes.io/projected/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-kube-api-access-zq59n\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.666944 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " 
pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.666991 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69x2s\" (UniqueName: \"kubernetes.io/projected/27bb3fa2-44eb-471c-a426-77d77d572ebb-kube-api-access-69x2s\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667017 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667045 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-log-httpd\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667115 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-config-data-custom\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667141 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-run-httpd\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667222 4748 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667261 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667294 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667344 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667397 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-scripts\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667439 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-scripts\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.667469 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-config-data\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769375 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769456 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769493 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769552 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc 
kubenswrapper[4748]: I0216 15:14:30.769580 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-scripts\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769635 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-scripts\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769680 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-config-data\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769779 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-logs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769830 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-config-data\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769910 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq59n\" (UniqueName: \"kubernetes.io/projected/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-kube-api-access-zq59n\") pod \"cinder-api-0\" (UID: 
\"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.769970 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.770015 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69x2s\" (UniqueName: \"kubernetes.io/projected/27bb3fa2-44eb-471c-a426-77d77d572ebb-kube-api-access-69x2s\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.770061 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.770090 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-log-httpd\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.770171 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-config-data-custom\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.770227 4748 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-run-httpd\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.771082 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-run-httpd\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.772804 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-etc-machine-id\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.777704 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-log-httpd\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.778021 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-logs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.778605 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-public-tls-certs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.778879 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-scripts\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.780526 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.780700 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-config-data\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.782040 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-scripts\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.791899 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-config-data\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.792634 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 
crc kubenswrapper[4748]: I0216 15:14:30.793165 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.795208 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.806015 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-config-data-custom\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.834728 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq59n\" (UniqueName: \"kubernetes.io/projected/0918b3f4-1fe6-4778-a6bb-ff623ae2bf57-kube-api-access-zq59n\") pod \"cinder-api-0\" (UID: \"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57\") " pod="openstack/cinder-api-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.855552 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69x2s\" (UniqueName: \"kubernetes.io/projected/27bb3fa2-44eb-471c-a426-77d77d572ebb-kube-api-access-69x2s\") pod \"ceilometer-0\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " pod="openstack/ceilometer-0" Feb 16 15:14:30 crc kubenswrapper[4748]: I0216 15:14:30.873824 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.017660 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d88fff67-d90f-4c7d-bf0d-711c87006f68" path="/var/lib/kubelet/pods/d88fff67-d90f-4c7d-bf0d-711c87006f68/volumes" Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.019210 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feec6f3c-c20e-4b41-bd9a-d3e32087acd6" path="/var/lib/kubelet/pods/feec6f3c-c20e-4b41-bd9a-d3e32087acd6/volumes" Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.152896 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.192275 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b9448587-thszr" event={"ID":"4da3d24c-5be3-45a4-a282-bbbd33f0dad7","Type":"ContainerStarted","Data":"9bdb023bfd474182eb4255c84711bd565c4dbcb6cdc88491957473a2473ebcae"} Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.192381 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.225830 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b9448587-thszr" podStartSLOduration=3.225812009 podStartE2EDuration="3.225812009s" podCreationTimestamp="2026-02-16 15:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:31.219585607 +0000 UTC m=+1296.911254676" watchObservedRunningTime="2026-02-16 15:14:31.225812009 +0000 UTC m=+1296.917481038" Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.398948 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.413838 
4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:14:31 crc kubenswrapper[4748]: W0216 15:14:31.744253 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0918b3f4_1fe6_4778_a6bb_ff623ae2bf57.slice/crio-e7775cab8700edf41dc389e2e1ee86d4dd6da25de21580a6326af4b11c395489 WatchSource:0}: Error finding container e7775cab8700edf41dc389e2e1ee86d4dd6da25de21580a6326af4b11c395489: Status 404 returned error can't find the container with id e7775cab8700edf41dc389e2e1ee86d4dd6da25de21580a6326af4b11c395489 Feb 16 15:14:31 crc kubenswrapper[4748]: I0216 15:14:31.746750 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.221614 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57","Type":"ContainerStarted","Data":"e7775cab8700edf41dc389e2e1ee86d4dd6da25de21580a6326af4b11c395489"} Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.226824 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerStarted","Data":"9954b77e43b9f468da1f1e37029209c0698960358bcce91fc3ff44412481255b"} Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.762964 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.926497 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkc6b\" (UniqueName: \"kubernetes.io/projected/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-kube-api-access-jkc6b\") pod \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.926670 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-logs\") pod \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.926768 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data\") pod \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.926805 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-combined-ca-bundle\") pod \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.926850 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data-custom\") pod \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\" (UID: \"0c0f602a-8a4c-4ad7-bac0-fc36623452d4\") " Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.927438 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-logs" (OuterVolumeSpecName: "logs") pod "0c0f602a-8a4c-4ad7-bac0-fc36623452d4" (UID: "0c0f602a-8a4c-4ad7-bac0-fc36623452d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.931875 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-kube-api-access-jkc6b" (OuterVolumeSpecName: "kube-api-access-jkc6b") pod "0c0f602a-8a4c-4ad7-bac0-fc36623452d4" (UID: "0c0f602a-8a4c-4ad7-bac0-fc36623452d4"). InnerVolumeSpecName "kube-api-access-jkc6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.932450 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0c0f602a-8a4c-4ad7-bac0-fc36623452d4" (UID: "0c0f602a-8a4c-4ad7-bac0-fc36623452d4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:32 crc kubenswrapper[4748]: I0216 15:14:32.955812 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0c0f602a-8a4c-4ad7-bac0-fc36623452d4" (UID: "0c0f602a-8a4c-4ad7-bac0-fc36623452d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:32.999885 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data" (OuterVolumeSpecName: "config-data") pod "0c0f602a-8a4c-4ad7-bac0-fc36623452d4" (UID: "0c0f602a-8a4c-4ad7-bac0-fc36623452d4"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.030225 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkc6b\" (UniqueName: \"kubernetes.io/projected/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-kube-api-access-jkc6b\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.030268 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.030284 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.030542 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.030560 4748 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c0f602a-8a4c-4ad7-bac0-fc36623452d4-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.239345 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerStarted","Data":"eff7e1fc4ce23c6c0136633e8d7a15244df50c716c8a76c498e1940768f417e4"} Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.242448 4748 generic.go:334] "Generic (PLEG): container finished" podID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerID="0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a" exitCode=0 Feb 16 15:14:33 crc 
kubenswrapper[4748]: I0216 15:14:33.242541 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c97479c48-gl8bw" event={"ID":"0c0f602a-8a4c-4ad7-bac0-fc36623452d4","Type":"ContainerDied","Data":"0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a"} Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.242579 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c97479c48-gl8bw" event={"ID":"0c0f602a-8a4c-4ad7-bac0-fc36623452d4","Type":"ContainerDied","Data":"5633e87e85f49c753fb73f65c9b9839b5af1874489eccff2f043d870f692b19b"} Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.242600 4748 scope.go:117] "RemoveContainer" containerID="0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.242754 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c97479c48-gl8bw" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.247641 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57","Type":"ContainerStarted","Data":"d52e2b8f214ca49ce4f77d69cbb76eb220729b145c37948d3fb7b5f710894ad9"} Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.390096 4748 scope.go:117] "RemoveContainer" containerID="6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.410219 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6c97479c48-gl8bw"] Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.420763 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6c97479c48-gl8bw"] Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.423233 4748 scope.go:117] "RemoveContainer" containerID="0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a" Feb 16 15:14:33 crc 
kubenswrapper[4748]: E0216 15:14:33.423728 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a\": container with ID starting with 0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a not found: ID does not exist" containerID="0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.423768 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a"} err="failed to get container status \"0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a\": rpc error: code = NotFound desc = could not find container \"0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a\": container with ID starting with 0460ec030671663b3909a56f433f08d4135a36e8b689e4812f051ae70345d87a not found: ID does not exist" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.423804 4748 scope.go:117] "RemoveContainer" containerID="6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d" Feb 16 15:14:33 crc kubenswrapper[4748]: E0216 15:14:33.424114 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d\": container with ID starting with 6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d not found: ID does not exist" containerID="6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d" Feb 16 15:14:33 crc kubenswrapper[4748]: I0216 15:14:33.424148 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d"} err="failed to get container status 
\"6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d\": rpc error: code = NotFound desc = could not find container \"6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d\": container with ID starting with 6c29741df474e478e3616325787b15ee8b328c4b0d638fc1ef0f839f9a9ff24d not found: ID does not exist" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.265723 4748 generic.go:334] "Generic (PLEG): container finished" podID="3938329a-9d91-481e-9993-09917f2c7686" containerID="bed6d02f53dd73b01cb8e00702a6ecbf387f6806345e2ae8294b7b0fc3a8d9af" exitCode=0 Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.265885 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9d456665-xjt6m" event={"ID":"3938329a-9d91-481e-9993-09917f2c7686","Type":"ContainerDied","Data":"bed6d02f53dd73b01cb8e00702a6ecbf387f6806345e2ae8294b7b0fc3a8d9af"} Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.277091 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerStarted","Data":"18a5d273becab93e2fb1eea919fa5847d1c7bcfb99afe478dcad78e3af436f68"} Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.277139 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerStarted","Data":"61f7f4fc68ccf8ea325f127697c3eea9932cf6a6a11b6f69b77f987dfbb46bac"} Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.283945 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"0918b3f4-1fe6-4778-a6bb-ff623ae2bf57","Type":"ContainerStarted","Data":"a70127cd80a4a71d5756a29cfad0e2bf3d40d99e4a61d64f61695b8c1d40d56a"} Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.284674 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 
15:14:34.313729 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.3136980959999995 podStartE2EDuration="4.313698096s" podCreationTimestamp="2026-02-16 15:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:34.306205192 +0000 UTC m=+1299.997874231" watchObservedRunningTime="2026-02-16 15:14:34.313698096 +0000 UTC m=+1300.005367135" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.451247 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.566443 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-internal-tls-certs\") pod \"3938329a-9d91-481e-9993-09917f2c7686\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.566683 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-public-tls-certs\") pod \"3938329a-9d91-481e-9993-09917f2c7686\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.566837 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4lnr\" (UniqueName: \"kubernetes.io/projected/3938329a-9d91-481e-9993-09917f2c7686-kube-api-access-k4lnr\") pod \"3938329a-9d91-481e-9993-09917f2c7686\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.566911 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-combined-ca-bundle\") pod \"3938329a-9d91-481e-9993-09917f2c7686\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.567290 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-config\") pod \"3938329a-9d91-481e-9993-09917f2c7686\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.567363 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-httpd-config\") pod \"3938329a-9d91-481e-9993-09917f2c7686\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.567395 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-ovndb-tls-certs\") pod \"3938329a-9d91-481e-9993-09917f2c7686\" (UID: \"3938329a-9d91-481e-9993-09917f2c7686\") " Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.606846 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "3938329a-9d91-481e-9993-09917f2c7686" (UID: "3938329a-9d91-481e-9993-09917f2c7686"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.606864 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3938329a-9d91-481e-9993-09917f2c7686-kube-api-access-k4lnr" (OuterVolumeSpecName: "kube-api-access-k4lnr") pod "3938329a-9d91-481e-9993-09917f2c7686" (UID: "3938329a-9d91-481e-9993-09917f2c7686"). InnerVolumeSpecName "kube-api-access-k4lnr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.647023 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3938329a-9d91-481e-9993-09917f2c7686" (UID: "3938329a-9d91-481e-9993-09917f2c7686"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.677434 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4lnr\" (UniqueName: \"kubernetes.io/projected/3938329a-9d91-481e-9993-09917f2c7686-kube-api-access-k4lnr\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.677478 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.677492 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.683913 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-internal-tls-certs" 
(OuterVolumeSpecName: "internal-tls-certs") pod "3938329a-9d91-481e-9993-09917f2c7686" (UID: "3938329a-9d91-481e-9993-09917f2c7686"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.706809 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3938329a-9d91-481e-9993-09917f2c7686" (UID: "3938329a-9d91-481e-9993-09917f2c7686"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.723542 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-config" (OuterVolumeSpecName: "config") pod "3938329a-9d91-481e-9993-09917f2c7686" (UID: "3938329a-9d91-481e-9993-09917f2c7686"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.730650 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "3938329a-9d91-481e-9993-09917f2c7686" (UID: "3938329a-9d91-481e-9993-09917f2c7686"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.789420 4748 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.789457 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.789471 4748 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:34 crc kubenswrapper[4748]: I0216 15:14:34.789482 4748 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3938329a-9d91-481e-9993-09917f2c7686-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.006816 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" path="/var/lib/kubelet/pods/0c0f602a-8a4c-4ad7-bac0-fc36623452d4/volumes" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.298324 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b9d456665-xjt6m" event={"ID":"3938329a-9d91-481e-9993-09917f2c7686","Type":"ContainerDied","Data":"1d8ba83511e7b3145fd3469bbeebd41caa58bac6caec18aedbe55cb40a1a3187"} Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.299099 4748 scope.go:117] "RemoveContainer" containerID="4c5d1586b5c082cdfc2a2efc11e6c42eecd94344390310c446a7cf8f5c626ab7" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.298361 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b9d456665-xjt6m" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.351847 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b9d456665-xjt6m"] Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.361523 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b9d456665-xjt6m"] Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.363463 4748 scope.go:117] "RemoveContainer" containerID="bed6d02f53dd73b01cb8e00702a6ecbf387f6806345e2ae8294b7b0fc3a8d9af" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.384880 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.468479 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-p5sw7"] Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.468750 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" podUID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerName="dnsmasq-dns" containerID="cri-o://e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34" gracePeriod=10 Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.610027 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.710527 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.832000 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:35 crc kubenswrapper[4748]: I0216 15:14:35.833941 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:14:36 crc kubenswrapper[4748]: 
I0216 15:14:36.118876 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.121297 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6b69f6f9cb-8v6bm"] Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.121695 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerName="init" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.121715 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerName="init" Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.121743 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerName="dnsmasq-dns" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.121750 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerName="dnsmasq-dns" Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.121765 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-api" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.121773 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-api" Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.121781 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api-log" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.121787 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api-log" Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.121815 4748 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.121820 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api" Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.121833 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-httpd" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.121840 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-httpd" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.122012 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-httpd" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.122024 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.122037 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="3938329a-9d91-481e-9993-09917f2c7686" containerName="neutron-api" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.122050 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerName="dnsmasq-dns" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.122061 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c0f602a-8a4c-4ad7-bac0-fc36623452d4" containerName="barbican-api-log" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.123097 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.136556 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b69f6f9cb-8v6bm"] Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.227121 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-config\") pod \"6ea74e1b-d321-4b71-9e84-55af4ab109af\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.227178 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f69hz\" (UniqueName: \"kubernetes.io/projected/6ea74e1b-d321-4b71-9e84-55af4ab109af-kube-api-access-f69hz\") pod \"6ea74e1b-d321-4b71-9e84-55af4ab109af\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.227220 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-swift-storage-0\") pod \"6ea74e1b-d321-4b71-9e84-55af4ab109af\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.227295 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-svc\") pod \"6ea74e1b-d321-4b71-9e84-55af4ab109af\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.227410 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-sb\") pod \"6ea74e1b-d321-4b71-9e84-55af4ab109af\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " Feb 
16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.227514 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-nb\") pod \"6ea74e1b-d321-4b71-9e84-55af4ab109af\" (UID: \"6ea74e1b-d321-4b71-9e84-55af4ab109af\") " Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.227956 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-combined-ca-bundle\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.228035 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-logs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.228070 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-public-tls-certs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.228142 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-config-data\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 
15:14:36.228230 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l467j\" (UniqueName: \"kubernetes.io/projected/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-kube-api-access-l467j\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.228327 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-scripts\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.228349 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-internal-tls-certs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.231878 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea74e1b-d321-4b71-9e84-55af4ab109af-kube-api-access-f69hz" (OuterVolumeSpecName: "kube-api-access-f69hz") pod "6ea74e1b-d321-4b71-9e84-55af4ab109af" (UID: "6ea74e1b-d321-4b71-9e84-55af4ab109af"). InnerVolumeSpecName "kube-api-access-f69hz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.283669 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ea74e1b-d321-4b71-9e84-55af4ab109af" (UID: "6ea74e1b-d321-4b71-9e84-55af4ab109af"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.300602 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ea74e1b-d321-4b71-9e84-55af4ab109af" (UID: "6ea74e1b-d321-4b71-9e84-55af4ab109af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.308049 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-config" (OuterVolumeSpecName: "config") pod "6ea74e1b-d321-4b71-9e84-55af4ab109af" (UID: "6ea74e1b-d321-4b71-9e84-55af4ab109af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.311478 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerStarted","Data":"cbacba8a813db7660bbc0e805e4e746028afc1c0e70e459562f6f922abb62e0d"} Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.311838 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.327533 4748 generic.go:334] "Generic (PLEG): container finished" podID="6ea74e1b-d321-4b71-9e84-55af4ab109af" containerID="e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34" exitCode=0 Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.328470 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.330770 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" event={"ID":"6ea74e1b-d321-4b71-9e84-55af4ab109af","Type":"ContainerDied","Data":"e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34"} Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.330844 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-p5sw7" event={"ID":"6ea74e1b-d321-4b71-9e84-55af4ab109af","Type":"ContainerDied","Data":"c807894d1d9a42eb024f144614e204e2404031d320fb6378b37f8882f5346358"} Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.330869 4748 scope.go:117] "RemoveContainer" containerID="e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.331273 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="cinder-scheduler" containerID="cri-o://317e09e74bd0ba6c6b9fd3b12c57123c285b8cb03c9af430ba8ef035db9d2af8" gracePeriod=30 Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.333918 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.299590963 podStartE2EDuration="6.333900175s" podCreationTimestamp="2026-02-16 15:14:30 +0000 UTC" firstStartedPulling="2026-02-16 15:14:31.398651753 +0000 UTC m=+1297.090320792" lastFinishedPulling="2026-02-16 15:14:35.432960955 +0000 UTC m=+1301.124630004" observedRunningTime="2026-02-16 15:14:36.328368669 +0000 UTC m=+1302.020037708" watchObservedRunningTime="2026-02-16 15:14:36.333900175 +0000 UTC m=+1302.025569214" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.334543 4748 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="probe" containerID="cri-o://f237e9e5edb5297133f20c1f9eda5f2f3afed30ae514b51c4a8aea9221a738b8" gracePeriod=30 Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.335651 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-scripts\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.335748 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-internal-tls-certs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.335940 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-combined-ca-bundle\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337152 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-logs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337197 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-public-tls-certs\") pod 
\"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337328 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-config-data\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337469 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l467j\" (UniqueName: \"kubernetes.io/projected/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-kube-api-access-l467j\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337763 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337787 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337801 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f69hz\" (UniqueName: \"kubernetes.io/projected/6ea74e1b-d321-4b71-9e84-55af4ab109af-kube-api-access-f69hz\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.337825 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:36 crc kubenswrapper[4748]: 
I0216 15:14:36.338800 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-logs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.342010 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-scripts\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.348371 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-public-tls-certs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.365490 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-internal-tls-certs\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.365696 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ea74e1b-d321-4b71-9e84-55af4ab109af" (UID: "6ea74e1b-d321-4b71-9e84-55af4ab109af"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.378299 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-combined-ca-bundle\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.378957 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-config-data\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.379631 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l467j\" (UniqueName: \"kubernetes.io/projected/fd67ae0f-8630-4868-8f11-1dd56d66d7a5-kube-api-access-l467j\") pod \"placement-6b69f6f9cb-8v6bm\" (UID: \"fd67ae0f-8630-4868-8f11-1dd56d66d7a5\") " pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.412293 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ea74e1b-d321-4b71-9e84-55af4ab109af" (UID: "6ea74e1b-d321-4b71-9e84-55af4ab109af"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.441233 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.441270 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ea74e1b-d321-4b71-9e84-55af4ab109af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.447458 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.550357 4748 scope.go:117] "RemoveContainer" containerID="8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.624577 4748 scope.go:117] "RemoveContainer" containerID="e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34" Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.625508 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34\": container with ID starting with e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34 not found: ID does not exist" containerID="e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.625573 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34"} err="failed to get container status \"e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34\": rpc error: code = NotFound desc = could not find 
container \"e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34\": container with ID starting with e6674c146565474fcb36de81274907e8662fd84d33bf04ae4b96c72f73164c34 not found: ID does not exist" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.625606 4748 scope.go:117] "RemoveContainer" containerID="8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac" Feb 16 15:14:36 crc kubenswrapper[4748]: E0216 15:14:36.626192 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac\": container with ID starting with 8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac not found: ID does not exist" containerID="8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.626229 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac"} err="failed to get container status \"8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac\": rpc error: code = NotFound desc = could not find container \"8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac\": container with ID starting with 8ae488d1c38c40c8e29bbec812460da70741a7d19e75f92b61362d38b05977ac not found: ID does not exist" Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.734562 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-p5sw7"] Feb 16 15:14:36 crc kubenswrapper[4748]: I0216 15:14:36.748178 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-p5sw7"] Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.009675 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3938329a-9d91-481e-9993-09917f2c7686" 
path="/var/lib/kubelet/pods/3938329a-9d91-481e-9993-09917f2c7686/volumes" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.010869 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea74e1b-d321-4b71-9e84-55af4ab109af" path="/var/lib/kubelet/pods/6ea74e1b-d321-4b71-9e84-55af4ab109af/volumes" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.077177 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6b69f6f9cb-8v6bm"] Feb 16 15:14:37 crc kubenswrapper[4748]: W0216 15:14:37.078806 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd67ae0f_8630_4868_8f11_1dd56d66d7a5.slice/crio-792577952e6762b412eab591bdd85eae34395ec319d147aeb8cf6b49d1a56349 WatchSource:0}: Error finding container 792577952e6762b412eab591bdd85eae34395ec319d147aeb8cf6b49d1a56349: Status 404 returned error can't find the container with id 792577952e6762b412eab591bdd85eae34395ec319d147aeb8cf6b49d1a56349 Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.390643 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b69f6f9cb-8v6bm" event={"ID":"fd67ae0f-8630-4868-8f11-1dd56d66d7a5","Type":"ContainerStarted","Data":"750106baafc701789e6cbeb0637fa4d7dde001fbcac0223ed0b93efaac727477"} Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.391000 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b69f6f9cb-8v6bm" event={"ID":"fd67ae0f-8630-4868-8f11-1dd56d66d7a5","Type":"ContainerStarted","Data":"792577952e6762b412eab591bdd85eae34395ec319d147aeb8cf6b49d1a56349"} Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.394443 4748 generic.go:334] "Generic (PLEG): container finished" podID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerID="f237e9e5edb5297133f20c1f9eda5f2f3afed30ae514b51c4a8aea9221a738b8" exitCode=0 Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.394485 4748 generic.go:334] 
"Generic (PLEG): container finished" podID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerID="317e09e74bd0ba6c6b9fd3b12c57123c285b8cb03c9af430ba8ef035db9d2af8" exitCode=0 Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.394530 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8c30cc99-bbea-493e-8117-b4e418ebd5ef","Type":"ContainerDied","Data":"f237e9e5edb5297133f20c1f9eda5f2f3afed30ae514b51c4a8aea9221a738b8"} Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.394567 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8c30cc99-bbea-493e-8117-b4e418ebd5ef","Type":"ContainerDied","Data":"317e09e74bd0ba6c6b9fd3b12c57123c285b8cb03c9af430ba8ef035db9d2af8"} Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.650869 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.780268 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c30cc99-bbea-493e-8117-b4e418ebd5ef-etc-machine-id\") pod \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.780427 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c30cc99-bbea-493e-8117-b4e418ebd5ef-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8c30cc99-bbea-493e-8117-b4e418ebd5ef" (UID: "8c30cc99-bbea-493e-8117-b4e418ebd5ef"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.780758 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data-custom\") pod \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.780803 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-scripts\") pod \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.780877 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnzkx\" (UniqueName: \"kubernetes.io/projected/8c30cc99-bbea-493e-8117-b4e418ebd5ef-kube-api-access-bnzkx\") pod \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.781014 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-combined-ca-bundle\") pod \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.781041 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data\") pod \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\" (UID: \"8c30cc99-bbea-493e-8117-b4e418ebd5ef\") " Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.781450 4748 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/8c30cc99-bbea-493e-8117-b4e418ebd5ef-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.786238 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-scripts" (OuterVolumeSpecName: "scripts") pod "8c30cc99-bbea-493e-8117-b4e418ebd5ef" (UID: "8c30cc99-bbea-493e-8117-b4e418ebd5ef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.787069 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8c30cc99-bbea-493e-8117-b4e418ebd5ef" (UID: "8c30cc99-bbea-493e-8117-b4e418ebd5ef"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.788447 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c30cc99-bbea-493e-8117-b4e418ebd5ef-kube-api-access-bnzkx" (OuterVolumeSpecName: "kube-api-access-bnzkx") pod "8c30cc99-bbea-493e-8117-b4e418ebd5ef" (UID: "8c30cc99-bbea-493e-8117-b4e418ebd5ef"). InnerVolumeSpecName "kube-api-access-bnzkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.845659 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c30cc99-bbea-493e-8117-b4e418ebd5ef" (UID: "8c30cc99-bbea-493e-8117-b4e418ebd5ef"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.883452 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.883487 4748 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.883497 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.883510 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnzkx\" (UniqueName: \"kubernetes.io/projected/8c30cc99-bbea-493e-8117-b4e418ebd5ef-kube-api-access-bnzkx\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.889473 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data" (OuterVolumeSpecName: "config-data") pod "8c30cc99-bbea-493e-8117-b4e418ebd5ef" (UID: "8c30cc99-bbea-493e-8117-b4e418ebd5ef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:37 crc kubenswrapper[4748]: I0216 15:14:37.985318 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c30cc99-bbea-493e-8117-b4e418ebd5ef-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.407759 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.412162 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8c30cc99-bbea-493e-8117-b4e418ebd5ef","Type":"ContainerDied","Data":"562af5973f05709c153075b0cc0032d9fd6056904e847046a7dc1cb63f04bae3"} Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.412205 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.412217 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.412225 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6b69f6f9cb-8v6bm" event={"ID":"fd67ae0f-8630-4868-8f11-1dd56d66d7a5","Type":"ContainerStarted","Data":"a652d1bec50be170256705b39002118981f6195f31ff6906cf32916b8f2eb254"} Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.412245 4748 scope.go:117] "RemoveContainer" containerID="f237e9e5edb5297133f20c1f9eda5f2f3afed30ae514b51c4a8aea9221a738b8" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.439950 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6b69f6f9cb-8v6bm" podStartSLOduration=2.439932657 podStartE2EDuration="2.439932657s" podCreationTimestamp="2026-02-16 15:14:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:38.436513993 +0000 UTC m=+1304.128183052" watchObservedRunningTime="2026-02-16 15:14:38.439932657 +0000 UTC m=+1304.131601696" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.459676 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.460160 4748 
scope.go:117] "RemoveContainer" containerID="317e09e74bd0ba6c6b9fd3b12c57123c285b8cb03c9af430ba8ef035db9d2af8" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.473513 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.509331 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:38 crc kubenswrapper[4748]: E0216 15:14:38.510055 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="probe" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.510080 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="probe" Feb 16 15:14:38 crc kubenswrapper[4748]: E0216 15:14:38.510094 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="cinder-scheduler" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.510101 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="cinder-scheduler" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.510389 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="cinder-scheduler" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.510422 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" containerName="probe" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.511813 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.520586 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.554238 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.601619 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.601687 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsf45\" (UniqueName: \"kubernetes.io/projected/99782bb5-c485-441e-9a7e-9225582d84bc-kube-api-access-zsf45\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.601758 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99782bb5-c485-441e-9a7e-9225582d84bc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.601805 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-config-data\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 
15:14:38.602118 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.602245 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-scripts\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.704366 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-scripts\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.704908 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.705009 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsf45\" (UniqueName: \"kubernetes.io/projected/99782bb5-c485-441e-9a7e-9225582d84bc-kube-api-access-zsf45\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.705105 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" 
(UniqueName: \"kubernetes.io/host-path/99782bb5-c485-441e-9a7e-9225582d84bc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.705180 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-config-data\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.705298 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/99782bb5-c485-441e-9a7e-9225582d84bc-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.705323 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.708616 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-scripts\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.709208 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " 
pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.709966 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-config-data\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.715666 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99782bb5-c485-441e-9a7e-9225582d84bc-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.721564 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsf45\" (UniqueName: \"kubernetes.io/projected/99782bb5-c485-441e-9a7e-9225582d84bc-kube-api-access-zsf45\") pod \"cinder-scheduler-0\" (UID: \"99782bb5-c485-441e-9a7e-9225582d84bc\") " pod="openstack/cinder-scheduler-0" Feb 16 15:14:38 crc kubenswrapper[4748]: I0216 15:14:38.846877 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 16 15:14:39 crc kubenswrapper[4748]: I0216 15:14:39.009891 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c30cc99-bbea-493e-8117-b4e418ebd5ef" path="/var/lib/kubelet/pods/8c30cc99-bbea-493e-8117-b4e418ebd5ef/volumes" Feb 16 15:14:39 crc kubenswrapper[4748]: I0216 15:14:39.316158 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 16 15:14:39 crc kubenswrapper[4748]: W0216 15:14:39.324345 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99782bb5_c485_441e_9a7e_9225582d84bc.slice/crio-0d8127e5d48dafcb7305b50c5d3b0450e291a22a9b0fa3c0eee85845e8a84511 WatchSource:0}: Error finding container 0d8127e5d48dafcb7305b50c5d3b0450e291a22a9b0fa3c0eee85845e8a84511: Status 404 returned error can't find the container with id 0d8127e5d48dafcb7305b50c5d3b0450e291a22a9b0fa3c0eee85845e8a84511 Feb 16 15:14:39 crc kubenswrapper[4748]: I0216 15:14:39.424693 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"99782bb5-c485-441e-9a7e-9225582d84bc","Type":"ContainerStarted","Data":"0d8127e5d48dafcb7305b50c5d3b0450e291a22a9b0fa3c0eee85845e8a84511"} Feb 16 15:14:40 crc kubenswrapper[4748]: I0216 15:14:40.441046 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"99782bb5-c485-441e-9a7e-9225582d84bc","Type":"ContainerStarted","Data":"30cc673faa9f184f36eec3280189eac00bb3ee81f2600c51e36d3d175815bb2d"} Feb 16 15:14:41 crc kubenswrapper[4748]: I0216 15:14:41.453067 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"99782bb5-c485-441e-9a7e-9225582d84bc","Type":"ContainerStarted","Data":"0c73e78666ff410bcdbdbad782173677335a4884f0d78320195484c178b5ee00"} Feb 16 15:14:41 crc kubenswrapper[4748]: I0216 15:14:41.473234 4748 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.473216736 podStartE2EDuration="3.473216736s" podCreationTimestamp="2026-02-16 15:14:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:41.469536396 +0000 UTC m=+1307.161205435" watchObservedRunningTime="2026-02-16 15:14:41.473216736 +0000 UTC m=+1307.164885775" Feb 16 15:14:42 crc kubenswrapper[4748]: E0216 15:14:41.999605 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:14:43 crc kubenswrapper[4748]: I0216 15:14:43.107355 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 16 15:14:43 crc kubenswrapper[4748]: I0216 15:14:43.847071 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 16 15:14:44 crc kubenswrapper[4748]: I0216 15:14:44.624220 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69b5b5cc64-hzmw7" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.662109 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.663560 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.669617 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.669693 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.669764 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-7nm5s" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.681398 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.780745 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fc31461-7669-46dc-ab65-839d0dc6b753-openstack-config\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.780794 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fc31461-7669-46dc-ab65-839d0dc6b753-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.780999 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc31461-7669-46dc-ab65-839d0dc6b753-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.781405 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lqt4\" (UniqueName: \"kubernetes.io/projected/7fc31461-7669-46dc-ab65-839d0dc6b753-kube-api-access-4lqt4\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.883605 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7fc31461-7669-46dc-ab65-839d0dc6b753-openstack-config\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.883912 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fc31461-7669-46dc-ab65-839d0dc6b753-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.884009 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc31461-7669-46dc-ab65-839d0dc6b753-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.884149 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lqt4\" (UniqueName: \"kubernetes.io/projected/7fc31461-7669-46dc-ab65-839d0dc6b753-kube-api-access-4lqt4\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.884635 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/7fc31461-7669-46dc-ab65-839d0dc6b753-openstack-config\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.889318 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc31461-7669-46dc-ab65-839d0dc6b753-combined-ca-bundle\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.891217 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7fc31461-7669-46dc-ab65-839d0dc6b753-openstack-config-secret\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.902833 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lqt4\" (UniqueName: \"kubernetes.io/projected/7fc31461-7669-46dc-ab65-839d0dc6b753-kube-api-access-4lqt4\") pod \"openstackclient\" (UID: \"7fc31461-7669-46dc-ab65-839d0dc6b753\") " pod="openstack/openstackclient" Feb 16 15:14:45 crc kubenswrapper[4748]: I0216 15:14:45.989031 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient"
Feb 16 15:14:46 crc kubenswrapper[4748]: I0216 15:14:46.458833 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Feb 16 15:14:46 crc kubenswrapper[4748]: W0216 15:14:46.459614 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fc31461_7669_46dc_ab65_839d0dc6b753.slice/crio-dd00e01718f70a8b69959083c31c4d8b8fdd6f4b15b84817cfba0e50eaa88fea WatchSource:0}: Error finding container dd00e01718f70a8b69959083c31c4d8b8fdd6f4b15b84817cfba0e50eaa88fea: Status 404 returned error can't find the container with id dd00e01718f70a8b69959083c31c4d8b8fdd6f4b15b84817cfba0e50eaa88fea
Feb 16 15:14:46 crc kubenswrapper[4748]: I0216 15:14:46.516101 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fc31461-7669-46dc-ab65-839d0dc6b753","Type":"ContainerStarted","Data":"dd00e01718f70a8b69959083c31c4d8b8fdd6f4b15b84817cfba0e50eaa88fea"}
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.126404 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.291701 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-588bd888d5-jbdss"]
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.293543 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.297084 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.297558 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.299479 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.346751 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-588bd888d5-jbdss"]
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.369897 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-public-tls-certs\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.370217 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-internal-tls-certs\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.370325 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-config-data\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.370446 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395a5c55-9892-4842-bf7b-ba42077818d3-log-httpd\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.371596 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pksrb\" (UniqueName: \"kubernetes.io/projected/395a5c55-9892-4842-bf7b-ba42077818d3-kube-api-access-pksrb\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.371699 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/395a5c55-9892-4842-bf7b-ba42077818d3-etc-swift\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.371891 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395a5c55-9892-4842-bf7b-ba42077818d3-run-httpd\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.371972 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-combined-ca-bundle\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474336 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395a5c55-9892-4842-bf7b-ba42077818d3-log-httpd\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474413 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pksrb\" (UniqueName: \"kubernetes.io/projected/395a5c55-9892-4842-bf7b-ba42077818d3-kube-api-access-pksrb\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474446 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/395a5c55-9892-4842-bf7b-ba42077818d3-etc-swift\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474599 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395a5c55-9892-4842-bf7b-ba42077818d3-run-httpd\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474630 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-combined-ca-bundle\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474687 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-public-tls-certs\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474762 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-internal-tls-certs\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.474815 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-config-data\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.475068 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395a5c55-9892-4842-bf7b-ba42077818d3-log-httpd\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.475880 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/395a5c55-9892-4842-bf7b-ba42077818d3-run-httpd\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.481152 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-internal-tls-certs\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.481542 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-combined-ca-bundle\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.482339 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/395a5c55-9892-4842-bf7b-ba42077818d3-etc-swift\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.484473 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-public-tls-certs\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.495888 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395a5c55-9892-4842-bf7b-ba42077818d3-config-data\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.506930 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pksrb\" (UniqueName: \"kubernetes.io/projected/395a5c55-9892-4842-bf7b-ba42077818d3-kube-api-access-pksrb\") pod \"swift-proxy-588bd888d5-jbdss\" (UID: \"395a5c55-9892-4842-bf7b-ba42077818d3\") " pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:49 crc kubenswrapper[4748]: I0216 15:14:49.648488 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-588bd888d5-jbdss"
Feb 16 15:14:50 crc kubenswrapper[4748]: I0216 15:14:50.284577 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-588bd888d5-jbdss"]
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.181842 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.182575 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-central-agent" containerID="cri-o://eff7e1fc4ce23c6c0136633e8d7a15244df50c716c8a76c498e1940768f417e4" gracePeriod=30
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.182809 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="proxy-httpd" containerID="cri-o://cbacba8a813db7660bbc0e805e4e746028afc1c0e70e459562f6f922abb62e0d" gracePeriod=30
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.182856 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="sg-core" containerID="cri-o://18a5d273becab93e2fb1eea919fa5847d1c7bcfb99afe478dcad78e3af436f68" gracePeriod=30
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.182889 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-notification-agent" containerID="cri-o://61f7f4fc68ccf8ea325f127697c3eea9932cf6a6a11b6f69b77f987dfbb46bac" gracePeriod=30
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.199026 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.188:3000/\": EOF"
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.578848 4748 generic.go:334] "Generic (PLEG): container finished" podID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerID="cbacba8a813db7660bbc0e805e4e746028afc1c0e70e459562f6f922abb62e0d" exitCode=0
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.579119 4748 generic.go:334] "Generic (PLEG): container finished" podID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerID="18a5d273becab93e2fb1eea919fa5847d1c7bcfb99afe478dcad78e3af436f68" exitCode=2
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.578943 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerDied","Data":"cbacba8a813db7660bbc0e805e4e746028afc1c0e70e459562f6f922abb62e0d"}
Feb 16 15:14:51 crc kubenswrapper[4748]: I0216 15:14:51.579159 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerDied","Data":"18a5d273becab93e2fb1eea919fa5847d1c7bcfb99afe478dcad78e3af436f68"}
Feb 16 15:14:52 crc kubenswrapper[4748]: I0216 15:14:52.600750 4748 generic.go:334] "Generic (PLEG): container finished" podID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerID="eff7e1fc4ce23c6c0136633e8d7a15244df50c716c8a76c498e1940768f417e4" exitCode=0
Feb 16 15:14:52 crc kubenswrapper[4748]: I0216 15:14:52.600802 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerDied","Data":"eff7e1fc4ce23c6c0136633e8d7a15244df50c716c8a76c498e1940768f417e4"}
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.027157 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-qqx9c"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.028345 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qqx9c"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.028427 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.184198 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-ndnqn"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.185646 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.189917 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx7nn\" (UniqueName: \"kubernetes.io/projected/79ac09ea-96de-4c66-b3ba-f41bf6993859-kube-api-access-gx7nn\") pod \"nova-api-db-create-qqx9c\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") " pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.190047 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ac09ea-96de-4c66-b3ba-f41bf6993859-operator-scripts\") pod \"nova-api-db-create-qqx9c\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") " pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.201243 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-5d6f-account-create-update-zt5xw"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.202676 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.212896 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ndnqn"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.218406 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.231157 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5d6f-account-create-update-zt5xw"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.282986 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-6r2l8"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.284339 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.292879 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec9879-a2a4-429f-af70-796e8246fce9-operator-scripts\") pod \"nova-api-5d6f-account-create-update-zt5xw\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") " pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.292974 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gx7nn\" (UniqueName: \"kubernetes.io/projected/79ac09ea-96de-4c66-b3ba-f41bf6993859-kube-api-access-gx7nn\") pod \"nova-api-db-create-qqx9c\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") " pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.293010 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74qn5\" (UniqueName: \"kubernetes.io/projected/c3ec9879-a2a4-429f-af70-796e8246fce9-kube-api-access-74qn5\") pod \"nova-api-5d6f-account-create-update-zt5xw\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") " pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.293106 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ac09ea-96de-4c66-b3ba-f41bf6993859-operator-scripts\") pod \"nova-api-db-create-qqx9c\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") " pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.293134 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-operator-scripts\") pod \"nova-cell0-db-create-ndnqn\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") " pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.293202 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6gg\" (UniqueName: \"kubernetes.io/projected/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-kube-api-access-bn6gg\") pod \"nova-cell0-db-create-ndnqn\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") " pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.294158 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ac09ea-96de-4c66-b3ba-f41bf6993859-operator-scripts\") pod \"nova-api-db-create-qqx9c\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") " pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.315800 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6r2l8"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.334829 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gx7nn\" (UniqueName: \"kubernetes.io/projected/79ac09ea-96de-4c66-b3ba-f41bf6993859-kube-api-access-gx7nn\") pod \"nova-api-db-create-qqx9c\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") " pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.380024 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.395068 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7846c19e-8dd8-4362-8324-3f23e257f4f4-operator-scripts\") pod \"nova-cell1-db-create-6r2l8\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") " pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.395182 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-operator-scripts\") pod \"nova-cell0-db-create-ndnqn\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") " pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.395245 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn6gg\" (UniqueName: \"kubernetes.io/projected/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-kube-api-access-bn6gg\") pod \"nova-cell0-db-create-ndnqn\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") " pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.395294 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec9879-a2a4-429f-af70-796e8246fce9-operator-scripts\") pod \"nova-api-5d6f-account-create-update-zt5xw\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") " pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.395361 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74qn5\" (UniqueName: \"kubernetes.io/projected/c3ec9879-a2a4-429f-af70-796e8246fce9-kube-api-access-74qn5\") pod \"nova-api-5d6f-account-create-update-zt5xw\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") " pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.395402 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b49h\" (UniqueName: \"kubernetes.io/projected/7846c19e-8dd8-4362-8324-3f23e257f4f4-kube-api-access-9b49h\") pod \"nova-cell1-db-create-6r2l8\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") " pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.396221 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-operator-scripts\") pod \"nova-cell0-db-create-ndnqn\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") " pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.397234 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec9879-a2a4-429f-af70-796e8246fce9-operator-scripts\") pod \"nova-api-5d6f-account-create-update-zt5xw\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") " pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.412294 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-6005-account-create-update-rrbfq"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.413783 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.418562 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.439238 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6005-account-create-update-rrbfq"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.446135 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn6gg\" (UniqueName: \"kubernetes.io/projected/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-kube-api-access-bn6gg\") pod \"nova-cell0-db-create-ndnqn\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") " pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.447238 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74qn5\" (UniqueName: \"kubernetes.io/projected/c3ec9879-a2a4-429f-af70-796e8246fce9-kube-api-access-74qn5\") pod \"nova-api-5d6f-account-create-update-zt5xw\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") " pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.497904 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b49h\" (UniqueName: \"kubernetes.io/projected/7846c19e-8dd8-4362-8324-3f23e257f4f4-kube-api-access-9b49h\") pod \"nova-cell1-db-create-6r2l8\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") " pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.497989 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7846c19e-8dd8-4362-8324-3f23e257f4f4-operator-scripts\") pod \"nova-cell1-db-create-6r2l8\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") " pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.498985 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7846c19e-8dd8-4362-8324-3f23e257f4f4-operator-scripts\") pod \"nova-cell1-db-create-6r2l8\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") " pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.502871 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.514189 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b49h\" (UniqueName: \"kubernetes.io/projected/7846c19e-8dd8-4362-8324-3f23e257f4f4-kube-api-access-9b49h\") pod \"nova-cell1-db-create-6r2l8\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") " pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.541655 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.588193 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-4260-account-create-update-bknw6"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.590856 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.593142 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.599524 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a08b5e-7769-4af5-a383-36f041c2fc9d-operator-scripts\") pod \"nova-cell0-6005-account-create-update-rrbfq\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") " pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.599604 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtgj\" (UniqueName: \"kubernetes.io/projected/48a08b5e-7769-4af5-a383-36f041c2fc9d-kube-api-access-dvtgj\") pod \"nova-cell0-6005-account-create-update-rrbfq\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") " pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.607082 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.641279 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-4260-account-create-update-bknw6"]
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.701531 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a08b5e-7769-4af5-a383-36f041c2fc9d-operator-scripts\") pod \"nova-cell0-6005-account-create-update-rrbfq\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") " pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.701639 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvtgj\" (UniqueName: \"kubernetes.io/projected/48a08b5e-7769-4af5-a383-36f041c2fc9d-kube-api-access-dvtgj\") pod \"nova-cell0-6005-account-create-update-rrbfq\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") " pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.701758 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbkcc\" (UniqueName: \"kubernetes.io/projected/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-kube-api-access-rbkcc\") pod \"nova-cell1-4260-account-create-update-bknw6\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") " pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.701797 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-operator-scripts\") pod \"nova-cell1-4260-account-create-update-bknw6\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") " pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.702830 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a08b5e-7769-4af5-a383-36f041c2fc9d-operator-scripts\") pod \"nova-cell0-6005-account-create-update-rrbfq\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") " pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.722234 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvtgj\" (UniqueName: \"kubernetes.io/projected/48a08b5e-7769-4af5-a383-36f041c2fc9d-kube-api-access-dvtgj\") pod \"nova-cell0-6005-account-create-update-rrbfq\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") " pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.803848 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbkcc\" (UniqueName: \"kubernetes.io/projected/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-kube-api-access-rbkcc\") pod \"nova-cell1-4260-account-create-update-bknw6\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") " pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.803906 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-operator-scripts\") pod \"nova-cell1-4260-account-create-update-bknw6\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") " pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.805054 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-operator-scripts\") pod \"nova-cell1-4260-account-create-update-bknw6\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") " pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.823232 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbkcc\" (UniqueName: \"kubernetes.io/projected/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-kube-api-access-rbkcc\") pod \"nova-cell1-4260-account-create-update-bknw6\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") " pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.883878 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:14:53 crc kubenswrapper[4748]: I0216 15:14:53.934340 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:14:55 crc kubenswrapper[4748]: I0216 15:14:55.667048 4748 generic.go:334] "Generic (PLEG): container finished" podID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerID="61f7f4fc68ccf8ea325f127697c3eea9932cf6a6a11b6f69b77f987dfbb46bac" exitCode=0
Feb 16 15:14:55 crc kubenswrapper[4748]: I0216 15:14:55.667143 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerDied","Data":"61f7f4fc68ccf8ea325f127697c3eea9932cf6a6a11b6f69b77f987dfbb46bac"}
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.669868 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.699387 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-588bd888d5-jbdss" event={"ID":"395a5c55-9892-4842-bf7b-ba42077818d3","Type":"ContainerStarted","Data":"c85a976c518313356f4953edea689336f103912f6454ac3b08e50dfb08283598"}
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.699441 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-588bd888d5-jbdss" event={"ID":"395a5c55-9892-4842-bf7b-ba42077818d3","Type":"ContainerStarted","Data":"d4371542c79a3d4889f67ea92511b351931c06b29c14a8961b63c4ed64cc310b"}
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.705741 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27bb3fa2-44eb-471c-a426-77d77d572ebb","Type":"ContainerDied","Data":"9954b77e43b9f468da1f1e37029209c0698960358bcce91fc3ff44412481255b"}
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.705802 4748 scope.go:117] "RemoveContainer" containerID="cbacba8a813db7660bbc0e805e4e746028afc1c0e70e459562f6f922abb62e0d"
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.705962 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.732371 4748 scope.go:117] "RemoveContainer" containerID="18a5d273becab93e2fb1eea919fa5847d1c7bcfb99afe478dcad78e3af436f68"
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.755006 4748 scope.go:117] "RemoveContainer" containerID="61f7f4fc68ccf8ea325f127697c3eea9932cf6a6a11b6f69b77f987dfbb46bac"
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.764090 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-scripts\") pod \"27bb3fa2-44eb-471c-a426-77d77d572ebb\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") "
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.764252 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-config-data\") pod \"27bb3fa2-44eb-471c-a426-77d77d572ebb\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") "
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.764310 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-log-httpd\") pod \"27bb3fa2-44eb-471c-a426-77d77d572ebb\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") "
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.764363 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-combined-ca-bundle\") pod \"27bb3fa2-44eb-471c-a426-77d77d572ebb\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") "
Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.764436 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69x2s\" (UniqueName:
\"kubernetes.io/projected/27bb3fa2-44eb-471c-a426-77d77d572ebb-kube-api-access-69x2s\") pod \"27bb3fa2-44eb-471c-a426-77d77d572ebb\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.764517 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-run-httpd\") pod \"27bb3fa2-44eb-471c-a426-77d77d572ebb\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.764592 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-sg-core-conf-yaml\") pod \"27bb3fa2-44eb-471c-a426-77d77d572ebb\" (UID: \"27bb3fa2-44eb-471c-a426-77d77d572ebb\") " Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.766496 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27bb3fa2-44eb-471c-a426-77d77d572ebb" (UID: "27bb3fa2-44eb-471c-a426-77d77d572ebb"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.767537 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27bb3fa2-44eb-471c-a426-77d77d572ebb" (UID: "27bb3fa2-44eb-471c-a426-77d77d572ebb"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.768375 4748 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.768392 4748 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27bb3fa2-44eb-471c-a426-77d77d572ebb-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.771348 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27bb3fa2-44eb-471c-a426-77d77d572ebb-kube-api-access-69x2s" (OuterVolumeSpecName: "kube-api-access-69x2s") pod "27bb3fa2-44eb-471c-a426-77d77d572ebb" (UID: "27bb3fa2-44eb-471c-a426-77d77d572ebb"). InnerVolumeSpecName "kube-api-access-69x2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.772817 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-scripts" (OuterVolumeSpecName: "scripts") pod "27bb3fa2-44eb-471c-a426-77d77d572ebb" (UID: "27bb3fa2-44eb-471c-a426-77d77d572ebb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.782608 4748 scope.go:117] "RemoveContainer" containerID="eff7e1fc4ce23c6c0136633e8d7a15244df50c716c8a76c498e1940768f417e4" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.834933 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27bb3fa2-44eb-471c-a426-77d77d572ebb" (UID: "27bb3fa2-44eb-471c-a426-77d77d572ebb"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.856315 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-6005-account-create-update-rrbfq"] Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.872417 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.872589 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69x2s\" (UniqueName: \"kubernetes.io/projected/27bb3fa2-44eb-471c-a426-77d77d572ebb-kube-api-access-69x2s\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.872670 4748 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.874231 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 16 15:14:56 crc kubenswrapper[4748]: I0216 15:14:56.904877 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27bb3fa2-44eb-471c-a426-77d77d572ebb" (UID: "27bb3fa2-44eb-471c-a426-77d77d572ebb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:56 crc kubenswrapper[4748]: W0216 15:14:56.962295 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7846c19e_8dd8_4362_8324_3f23e257f4f4.slice/crio-dca93980336b2dee5687f13887126fab40bae0131261fe79b2fcdb9fef5063db WatchSource:0}: Error finding container dca93980336b2dee5687f13887126fab40bae0131261fe79b2fcdb9fef5063db: Status 404 returned error can't find the container with id dca93980336b2dee5687f13887126fab40bae0131261fe79b2fcdb9fef5063db Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.006751 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-config-data" (OuterVolumeSpecName: "config-data") pod "27bb3fa2-44eb-471c-a426-77d77d572ebb" (UID: "27bb3fa2-44eb-471c-a426-77d77d572ebb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.019968 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.025988 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.026015 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27bb3fa2-44eb-471c-a426-77d77d572ebb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.092432 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-6r2l8"] Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.092472 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell1-4260-account-create-update-bknw6"] Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.123152 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-ndnqn"] Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.133084 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-5d6f-account-create-update-zt5xw"] Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.143613 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-qqx9c"] Feb 16 15:14:57 crc kubenswrapper[4748]: W0216 15:14:57.160453 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3ec9879_a2a4_429f_af70_796e8246fce9.slice/crio-7fd8a298c911a1af2055ea634f7f65b0243c4c89e951c5dbc708b1f0067ac008 WatchSource:0}: Error finding container 7fd8a298c911a1af2055ea634f7f65b0243c4c89e951c5dbc708b1f0067ac008: Status 404 returned error can't find the container with id 7fd8a298c911a1af2055ea634f7f65b0243c4c89e951c5dbc708b1f0067ac008 Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.168999 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 16 15:14:57 crc kubenswrapper[4748]: E0216 15:14:57.210647 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:14:57 crc kubenswrapper[4748]: E0216 15:14:57.210982 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:14:57 crc kubenswrapper[4748]: E0216 15:14:57.211259 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:14:57 crc kubenswrapper[4748]: E0216 15:14:57.212662 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.723517 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-588bd888d5-jbdss" event={"ID":"395a5c55-9892-4842-bf7b-ba42077818d3","Type":"ContainerStarted","Data":"b1cb72eb8c9d70ba5f64fabafb133f14fccec82951947dbcaa12a7ba4ee3f503"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.723926 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-588bd888d5-jbdss" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.723958 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-588bd888d5-jbdss" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.727049 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqx9c" event={"ID":"79ac09ea-96de-4c66-b3ba-f41bf6993859","Type":"ContainerStarted","Data":"f78504620b513404600edd5aa1ccbb5d5305103e28ba681f97bb5f8da5c003dd"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.727089 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqx9c" event={"ID":"79ac09ea-96de-4c66-b3ba-f41bf6993859","Type":"ContainerStarted","Data":"31fb42bad07676fe2aa159fc88e229073d55d9d09cbb7d2172450538e6297ba9"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.729154 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6005-account-create-update-rrbfq" event={"ID":"48a08b5e-7769-4af5-a383-36f041c2fc9d","Type":"ContainerStarted","Data":"f72f3e2d5b75c85c7bf73dfee935e4c3b568320ee5e7422bccb9713d80075ed9"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.729192 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6005-account-create-update-rrbfq" 
event={"ID":"48a08b5e-7769-4af5-a383-36f041c2fc9d","Type":"ContainerStarted","Data":"bf9b8038b9ebc1473c795e09d20bd360927d5748cb38d489ab37e5fec88efcdc"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.735437 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5d6f-account-create-update-zt5xw" event={"ID":"c3ec9879-a2a4-429f-af70-796e8246fce9","Type":"ContainerStarted","Data":"8dee0793e46a1c6152475b300e16da3e79fbb60f1e483394272e1eea5eb03bbe"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.735475 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5d6f-account-create-update-zt5xw" event={"ID":"c3ec9879-a2a4-429f-af70-796e8246fce9","Type":"ContainerStarted","Data":"7fd8a298c911a1af2055ea634f7f65b0243c4c89e951c5dbc708b1f0067ac008"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.739561 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4260-account-create-update-bknw6" event={"ID":"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e","Type":"ContainerStarted","Data":"aca736f26971ab9a07a672df46a4d4e5cb8e87a66ab2f4d83540069f8fb0b68f"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.739887 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4260-account-create-update-bknw6" event={"ID":"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e","Type":"ContainerStarted","Data":"8e4b103adf94ac49d2b54692f221453b7885ce8f84252483e33f002e2946650e"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.750772 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r2l8" event={"ID":"7846c19e-8dd8-4362-8324-3f23e257f4f4","Type":"ContainerStarted","Data":"3d9c470293808fd7b20c8198d2d095cb9890e2e281b2fc3959f351c759a2cdbf"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.750816 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r2l8" 
event={"ID":"7846c19e-8dd8-4362-8324-3f23e257f4f4","Type":"ContainerStarted","Data":"dca93980336b2dee5687f13887126fab40bae0131261fe79b2fcdb9fef5063db"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.756517 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ndnqn" event={"ID":"64534f44-ea87-4c15-a1a6-a9c2e8b799dd","Type":"ContainerStarted","Data":"15eb3630e4adaeb16c0460571c800ff6be2b8ffeb350c9374ab914e1ba540153"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.756573 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ndnqn" event={"ID":"64534f44-ea87-4c15-a1a6-a9c2e8b799dd","Type":"ContainerStarted","Data":"e88432417c23122181ca16c5b047929648c5cdc206f9808d981fee50f114e107"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.759286 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"7fc31461-7669-46dc-ab65-839d0dc6b753","Type":"ContainerStarted","Data":"0f3bcf7c2aea1463af4b46b977ce1dbf300ce4d63b39b10558eea60a246f9f8d"} Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.761238 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-588bd888d5-jbdss" podStartSLOduration=8.761214983 podStartE2EDuration="8.761214983s" podCreationTimestamp="2026-02-16 15:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.742663848 +0000 UTC m=+1323.434332887" watchObservedRunningTime="2026-02-16 15:14:57.761214983 +0000 UTC m=+1323.452884022" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.765965 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-6005-account-create-update-rrbfq" podStartSLOduration=4.765946489 podStartE2EDuration="4.765946489s" podCreationTimestamp="2026-02-16 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.763792576 +0000 UTC m=+1323.455461615" watchObservedRunningTime="2026-02-16 15:14:57.765946489 +0000 UTC m=+1323.457615528" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.793816 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-4260-account-create-update-bknw6" podStartSLOduration=4.793796871 podStartE2EDuration="4.793796871s" podCreationTimestamp="2026-02-16 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.7790502 +0000 UTC m=+1323.470719239" watchObservedRunningTime="2026-02-16 15:14:57.793796871 +0000 UTC m=+1323.485465900" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.813614 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-5d6f-account-create-update-zt5xw" podStartSLOduration=4.813595776 podStartE2EDuration="4.813595776s" podCreationTimestamp="2026-02-16 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.793905324 +0000 UTC m=+1323.485574363" watchObservedRunningTime="2026-02-16 15:14:57.813595776 +0000 UTC m=+1323.505264815" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.821698 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-qqx9c" podStartSLOduration=5.821674544 podStartE2EDuration="5.821674544s" podCreationTimestamp="2026-02-16 15:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.811003982 +0000 UTC m=+1323.502673021" watchObservedRunningTime="2026-02-16 15:14:57.821674544 +0000 UTC m=+1323.513343583" Feb 16 15:14:57 crc 
kubenswrapper[4748]: I0216 15:14:57.847397 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-ndnqn" podStartSLOduration=4.847374783 podStartE2EDuration="4.847374783s" podCreationTimestamp="2026-02-16 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.831657968 +0000 UTC m=+1323.523327007" watchObservedRunningTime="2026-02-16 15:14:57.847374783 +0000 UTC m=+1323.539043822" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.895771 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-6r2l8" podStartSLOduration=4.895748158 podStartE2EDuration="4.895748158s" podCreationTimestamp="2026-02-16 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:14:57.845685182 +0000 UTC m=+1323.537354221" watchObservedRunningTime="2026-02-16 15:14:57.895748158 +0000 UTC m=+1323.587417187" Feb 16 15:14:57 crc kubenswrapper[4748]: I0216 15:14:57.910307 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.089606752 podStartE2EDuration="12.910289475s" podCreationTimestamp="2026-02-16 15:14:45 +0000 UTC" firstStartedPulling="2026-02-16 15:14:46.462792638 +0000 UTC m=+1312.154461677" lastFinishedPulling="2026-02-16 15:14:56.283475361 +0000 UTC m=+1321.975144400" observedRunningTime="2026-02-16 15:14:57.869451594 +0000 UTC m=+1323.561120633" watchObservedRunningTime="2026-02-16 15:14:57.910289475 +0000 UTC m=+1323.601958514" Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.770928 4748 generic.go:334] "Generic (PLEG): container finished" podID="c3ec9879-a2a4-429f-af70-796e8246fce9" containerID="8dee0793e46a1c6152475b300e16da3e79fbb60f1e483394272e1eea5eb03bbe" exitCode=0 Feb 16 
15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.770974 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5d6f-account-create-update-zt5xw" event={"ID":"c3ec9879-a2a4-429f-af70-796e8246fce9","Type":"ContainerDied","Data":"8dee0793e46a1c6152475b300e16da3e79fbb60f1e483394272e1eea5eb03bbe"} Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.787064 4748 generic.go:334] "Generic (PLEG): container finished" podID="b0deb4ac-44ab-4427-a6d8-ec4dcc55981e" containerID="aca736f26971ab9a07a672df46a4d4e5cb8e87a66ab2f4d83540069f8fb0b68f" exitCode=0 Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.787183 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4260-account-create-update-bknw6" event={"ID":"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e","Type":"ContainerDied","Data":"aca736f26971ab9a07a672df46a4d4e5cb8e87a66ab2f4d83540069f8fb0b68f"} Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.789648 4748 generic.go:334] "Generic (PLEG): container finished" podID="7846c19e-8dd8-4362-8324-3f23e257f4f4" containerID="3d9c470293808fd7b20c8198d2d095cb9890e2e281b2fc3959f351c759a2cdbf" exitCode=0 Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.789749 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r2l8" event={"ID":"7846c19e-8dd8-4362-8324-3f23e257f4f4","Type":"ContainerDied","Data":"3d9c470293808fd7b20c8198d2d095cb9890e2e281b2fc3959f351c759a2cdbf"} Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.824210 4748 generic.go:334] "Generic (PLEG): container finished" podID="64534f44-ea87-4c15-a1a6-a9c2e8b799dd" containerID="15eb3630e4adaeb16c0460571c800ff6be2b8ffeb350c9374ab914e1ba540153" exitCode=0 Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.824332 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ndnqn" 
event={"ID":"64534f44-ea87-4c15-a1a6-a9c2e8b799dd","Type":"ContainerDied","Data":"15eb3630e4adaeb16c0460571c800ff6be2b8ffeb350c9374ab914e1ba540153"} Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.826624 4748 generic.go:334] "Generic (PLEG): container finished" podID="79ac09ea-96de-4c66-b3ba-f41bf6993859" containerID="f78504620b513404600edd5aa1ccbb5d5305103e28ba681f97bb5f8da5c003dd" exitCode=0 Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.826689 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqx9c" event={"ID":"79ac09ea-96de-4c66-b3ba-f41bf6993859","Type":"ContainerDied","Data":"f78504620b513404600edd5aa1ccbb5d5305103e28ba681f97bb5f8da5c003dd"} Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.830121 4748 generic.go:334] "Generic (PLEG): container finished" podID="48a08b5e-7769-4af5-a383-36f041c2fc9d" containerID="f72f3e2d5b75c85c7bf73dfee935e4c3b568320ee5e7422bccb9713d80075ed9" exitCode=0 Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.832442 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6005-account-create-update-rrbfq" event={"ID":"48a08b5e-7769-4af5-a383-36f041c2fc9d","Type":"ContainerDied","Data":"f72f3e2d5b75c85c7bf73dfee935e4c3b568320ee5e7422bccb9713d80075ed9"} Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.883199 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6b9448587-thszr" Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.953322 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85b968f78-p2mst"] Feb 16 15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.953593 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85b968f78-p2mst" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-api" containerID="cri-o://3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb" gracePeriod=30 Feb 16 
15:14:58 crc kubenswrapper[4748]: I0216 15:14:58.954237 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-85b968f78-p2mst" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-httpd" containerID="cri-o://d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596" gracePeriod=30
Feb 16 15:14:59 crc kubenswrapper[4748]: I0216 15:14:59.841924 4748 generic.go:334] "Generic (PLEG): container finished" podID="5828c48d-11da-4512-9c8d-75a789082601" containerID="d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596" exitCode=0
Feb 16 15:14:59 crc kubenswrapper[4748]: I0216 15:14:59.842259 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85b968f78-p2mst" event={"ID":"5828c48d-11da-4512-9c8d-75a789082601","Type":"ContainerDied","Data":"d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596"}
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.146730 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"]
Feb 16 15:15:00 crc kubenswrapper[4748]: E0216 15:15:00.147617 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="proxy-httpd"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.147641 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="proxy-httpd"
Feb 16 15:15:00 crc kubenswrapper[4748]: E0216 15:15:00.147661 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="sg-core"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.147669 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="sg-core"
Feb 16 15:15:00 crc kubenswrapper[4748]: E0216 15:15:00.147694 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-central-agent"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.147702 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-central-agent"
Feb 16 15:15:00 crc kubenswrapper[4748]: E0216 15:15:00.147731 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-notification-agent"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.147741 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-notification-agent"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.147981 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="proxy-httpd"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.147995 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-notification-agent"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.148017 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="sg-core"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.148046 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" containerName="ceilometer-central-agent"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.148893 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.151151 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.151393 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.158552 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"]
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.269917 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.307559 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a0cbe75-12eb-4085-b6f6-245a78c8b140-config-volume\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.307833 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hff9x\" (UniqueName: \"kubernetes.io/projected/8a0cbe75-12eb-4085-b6f6-245a78c8b140-kube-api-access-hff9x\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.307996 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a0cbe75-12eb-4085-b6f6-245a78c8b140-secret-volume\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.410981 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b0deb4ac-44ab-4427-a6d8-ec4dcc55981e" (UID: "b0deb4ac-44ab-4427-a6d8-ec4dcc55981e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.411032 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-operator-scripts\") pod \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.411134 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbkcc\" (UniqueName: \"kubernetes.io/projected/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-kube-api-access-rbkcc\") pod \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\" (UID: \"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.411565 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a0cbe75-12eb-4085-b6f6-245a78c8b140-config-volume\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.411616 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hff9x\" (UniqueName: \"kubernetes.io/projected/8a0cbe75-12eb-4085-b6f6-245a78c8b140-kube-api-access-hff9x\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.411681 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a0cbe75-12eb-4085-b6f6-245a78c8b140-secret-volume\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.412419 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a0cbe75-12eb-4085-b6f6-245a78c8b140-config-volume\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.415047 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.422116 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-kube-api-access-rbkcc" (OuterVolumeSpecName: "kube-api-access-rbkcc") pod "b0deb4ac-44ab-4427-a6d8-ec4dcc55981e" (UID: "b0deb4ac-44ab-4427-a6d8-ec4dcc55981e"). InnerVolumeSpecName "kube-api-access-rbkcc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.422480 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a0cbe75-12eb-4085-b6f6-245a78c8b140-secret-volume\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.441182 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hff9x\" (UniqueName: \"kubernetes.io/projected/8a0cbe75-12eb-4085-b6f6-245a78c8b140-kube-api-access-hff9x\") pod \"collect-profiles-29520915-vflj6\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.517152 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbkcc\" (UniqueName: \"kubernetes.io/projected/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e-kube-api-access-rbkcc\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.580342 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.604288 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.606530 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.645489 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.654683 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.726947 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ac09ea-96de-4c66-b3ba-f41bf6993859-operator-scripts\") pod \"79ac09ea-96de-4c66-b3ba-f41bf6993859\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.727235 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec9879-a2a4-429f-af70-796e8246fce9-operator-scripts\") pod \"c3ec9879-a2a4-429f-af70-796e8246fce9\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.727441 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-operator-scripts\") pod \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.727563 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gx7nn\" (UniqueName: \"kubernetes.io/projected/79ac09ea-96de-4c66-b3ba-f41bf6993859-kube-api-access-gx7nn\") pod \"79ac09ea-96de-4c66-b3ba-f41bf6993859\" (UID: \"79ac09ea-96de-4c66-b3ba-f41bf6993859\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.727659 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74qn5\" (UniqueName: \"kubernetes.io/projected/c3ec9879-a2a4-429f-af70-796e8246fce9-kube-api-access-74qn5\") pod \"c3ec9879-a2a4-429f-af70-796e8246fce9\" (UID: \"c3ec9879-a2a4-429f-af70-796e8246fce9\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.727774 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn6gg\" (UniqueName: \"kubernetes.io/projected/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-kube-api-access-bn6gg\") pod \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\" (UID: \"64534f44-ea87-4c15-a1a6-a9c2e8b799dd\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.728070 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7846c19e-8dd8-4362-8324-3f23e257f4f4-operator-scripts\") pod \"7846c19e-8dd8-4362-8324-3f23e257f4f4\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.728221 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b49h\" (UniqueName: \"kubernetes.io/projected/7846c19e-8dd8-4362-8324-3f23e257f4f4-kube-api-access-9b49h\") pod \"7846c19e-8dd8-4362-8324-3f23e257f4f4\" (UID: \"7846c19e-8dd8-4362-8324-3f23e257f4f4\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.727669 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79ac09ea-96de-4c66-b3ba-f41bf6993859-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79ac09ea-96de-4c66-b3ba-f41bf6993859" (UID: "79ac09ea-96de-4c66-b3ba-f41bf6993859"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.729615 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3ec9879-a2a4-429f-af70-796e8246fce9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c3ec9879-a2a4-429f-af70-796e8246fce9" (UID: "c3ec9879-a2a4-429f-af70-796e8246fce9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.730217 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "64534f44-ea87-4c15-a1a6-a9c2e8b799dd" (UID: "64534f44-ea87-4c15-a1a6-a9c2e8b799dd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.730532 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7846c19e-8dd8-4362-8324-3f23e257f4f4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7846c19e-8dd8-4362-8324-3f23e257f4f4" (UID: "7846c19e-8dd8-4362-8324-3f23e257f4f4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.731943 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7846c19e-8dd8-4362-8324-3f23e257f4f4-kube-api-access-9b49h" (OuterVolumeSpecName: "kube-api-access-9b49h") pod "7846c19e-8dd8-4362-8324-3f23e257f4f4" (UID: "7846c19e-8dd8-4362-8324-3f23e257f4f4"). InnerVolumeSpecName "kube-api-access-9b49h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.734858 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ec9879-a2a4-429f-af70-796e8246fce9-kube-api-access-74qn5" (OuterVolumeSpecName: "kube-api-access-74qn5") pod "c3ec9879-a2a4-429f-af70-796e8246fce9" (UID: "c3ec9879-a2a4-429f-af70-796e8246fce9"). InnerVolumeSpecName "kube-api-access-74qn5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.734965 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79ac09ea-96de-4c66-b3ba-f41bf6993859-kube-api-access-gx7nn" (OuterVolumeSpecName: "kube-api-access-gx7nn") pod "79ac09ea-96de-4c66-b3ba-f41bf6993859" (UID: "79ac09ea-96de-4c66-b3ba-f41bf6993859"). InnerVolumeSpecName "kube-api-access-gx7nn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.735040 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-kube-api-access-bn6gg" (OuterVolumeSpecName: "kube-api-access-bn6gg") pod "64534f44-ea87-4c15-a1a6-a9c2e8b799dd" (UID: "64534f44-ea87-4c15-a1a6-a9c2e8b799dd"). InnerVolumeSpecName "kube-api-access-bn6gg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.738259 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.831002 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvtgj\" (UniqueName: \"kubernetes.io/projected/48a08b5e-7769-4af5-a383-36f041c2fc9d-kube-api-access-dvtgj\") pod \"48a08b5e-7769-4af5-a383-36f041c2fc9d\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.831606 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a08b5e-7769-4af5-a383-36f041c2fc9d-operator-scripts\") pod \"48a08b5e-7769-4af5-a383-36f041c2fc9d\" (UID: \"48a08b5e-7769-4af5-a383-36f041c2fc9d\") "
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832242 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7846c19e-8dd8-4362-8324-3f23e257f4f4-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832255 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b49h\" (UniqueName: \"kubernetes.io/projected/7846c19e-8dd8-4362-8324-3f23e257f4f4-kube-api-access-9b49h\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832264 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79ac09ea-96de-4c66-b3ba-f41bf6993859-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832274 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c3ec9879-a2a4-429f-af70-796e8246fce9-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832282 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832290 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gx7nn\" (UniqueName: \"kubernetes.io/projected/79ac09ea-96de-4c66-b3ba-f41bf6993859-kube-api-access-gx7nn\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832298 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74qn5\" (UniqueName: \"kubernetes.io/projected/c3ec9879-a2a4-429f-af70-796e8246fce9-kube-api-access-74qn5\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832307 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn6gg\" (UniqueName: \"kubernetes.io/projected/64534f44-ea87-4c15-a1a6-a9c2e8b799dd-kube-api-access-bn6gg\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.832798 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48a08b5e-7769-4af5-a383-36f041c2fc9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48a08b5e-7769-4af5-a383-36f041c2fc9d" (UID: "48a08b5e-7769-4af5-a383-36f041c2fc9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.835833 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48a08b5e-7769-4af5-a383-36f041c2fc9d-kube-api-access-dvtgj" (OuterVolumeSpecName: "kube-api-access-dvtgj") pod "48a08b5e-7769-4af5-a383-36f041c2fc9d" (UID: "48a08b5e-7769-4af5-a383-36f041c2fc9d"). InnerVolumeSpecName "kube-api-access-dvtgj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.859910 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-5d6f-account-create-update-zt5xw"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.860821 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-5d6f-account-create-update-zt5xw" event={"ID":"c3ec9879-a2a4-429f-af70-796e8246fce9","Type":"ContainerDied","Data":"7fd8a298c911a1af2055ea634f7f65b0243c4c89e951c5dbc708b1f0067ac008"}
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.860857 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fd8a298c911a1af2055ea634f7f65b0243c4c89e951c5dbc708b1f0067ac008"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.876046 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-4260-account-create-update-bknw6" event={"ID":"b0deb4ac-44ab-4427-a6d8-ec4dcc55981e","Type":"ContainerDied","Data":"8e4b103adf94ac49d2b54692f221453b7885ce8f84252483e33f002e2946650e"}
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.876078 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e4b103adf94ac49d2b54692f221453b7885ce8f84252483e33f002e2946650e"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.876163 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-4260-account-create-update-bknw6"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.895106 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-6r2l8" event={"ID":"7846c19e-8dd8-4362-8324-3f23e257f4f4","Type":"ContainerDied","Data":"dca93980336b2dee5687f13887126fab40bae0131261fe79b2fcdb9fef5063db"}
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.895165 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dca93980336b2dee5687f13887126fab40bae0131261fe79b2fcdb9fef5063db"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.895509 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-6r2l8"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.900666 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-ndnqn" event={"ID":"64534f44-ea87-4c15-a1a6-a9c2e8b799dd","Type":"ContainerDied","Data":"e88432417c23122181ca16c5b047929648c5cdc206f9808d981fee50f114e107"}
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.901507 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-ndnqn"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.903260 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-qqx9c"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.905677 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e88432417c23122181ca16c5b047929648c5cdc206f9808d981fee50f114e107"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.905736 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-qqx9c" event={"ID":"79ac09ea-96de-4c66-b3ba-f41bf6993859","Type":"ContainerDied","Data":"31fb42bad07676fe2aa159fc88e229073d55d9d09cbb7d2172450538e6297ba9"}
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.905812 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31fb42bad07676fe2aa159fc88e229073d55d9d09cbb7d2172450538e6297ba9"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.907287 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-6005-account-create-update-rrbfq" event={"ID":"48a08b5e-7769-4af5-a383-36f041c2fc9d","Type":"ContainerDied","Data":"bf9b8038b9ebc1473c795e09d20bd360927d5748cb38d489ab37e5fec88efcdc"}
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.907578 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-6005-account-create-update-rrbfq"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.908002 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf9b8038b9ebc1473c795e09d20bd360927d5748cb38d489ab37e5fec88efcdc"
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.936142 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvtgj\" (UniqueName: \"kubernetes.io/projected/48a08b5e-7769-4af5-a383-36f041c2fc9d-kube-api-access-dvtgj\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:00 crc kubenswrapper[4748]: I0216 15:15:00.936175 4748 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48a08b5e-7769-4af5-a383-36f041c2fc9d-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:01 crc kubenswrapper[4748]: I0216 15:15:01.201796 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"]
Feb 16 15:15:01 crc kubenswrapper[4748]: I0216 15:15:01.921183 4748 generic.go:334] "Generic (PLEG): container finished" podID="8a0cbe75-12eb-4085-b6f6-245a78c8b140" containerID="ac9a592803502bace08073fb0d3aceeb7fc79b7cb4b08d6674c87a0132ad9e31" exitCode=0
Feb 16 15:15:01 crc kubenswrapper[4748]: I0216 15:15:01.921296 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6" event={"ID":"8a0cbe75-12eb-4085-b6f6-245a78c8b140","Type":"ContainerDied","Data":"ac9a592803502bace08073fb0d3aceeb7fc79b7cb4b08d6674c87a0132ad9e31"}
Feb 16 15:15:01 crc kubenswrapper[4748]: I0216 15:15:01.921507 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6" event={"ID":"8a0cbe75-12eb-4085-b6f6-245a78c8b140","Type":"ContainerStarted","Data":"785cb88f58121da7f386fd0830a6a27846bd7bea7139bef0921f115234ae74dd"}
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.752590 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85b968f78-p2mst"
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.874900 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4mg8\" (UniqueName: \"kubernetes.io/projected/5828c48d-11da-4512-9c8d-75a789082601-kube-api-access-g4mg8\") pod \"5828c48d-11da-4512-9c8d-75a789082601\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") "
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.875318 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-combined-ca-bundle\") pod \"5828c48d-11da-4512-9c8d-75a789082601\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") "
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.875438 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-config\") pod \"5828c48d-11da-4512-9c8d-75a789082601\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") "
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.875633 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-ovndb-tls-certs\") pod \"5828c48d-11da-4512-9c8d-75a789082601\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") "
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.875758 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-httpd-config\") pod \"5828c48d-11da-4512-9c8d-75a789082601\" (UID: \"5828c48d-11da-4512-9c8d-75a789082601\") "
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.908207 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5828c48d-11da-4512-9c8d-75a789082601-kube-api-access-g4mg8" (OuterVolumeSpecName: "kube-api-access-g4mg8") pod "5828c48d-11da-4512-9c8d-75a789082601" (UID: "5828c48d-11da-4512-9c8d-75a789082601"). InnerVolumeSpecName "kube-api-access-g4mg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.928918 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5828c48d-11da-4512-9c8d-75a789082601" (UID: "5828c48d-11da-4512-9c8d-75a789082601"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.980085 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4mg8\" (UniqueName: \"kubernetes.io/projected/5828c48d-11da-4512-9c8d-75a789082601-kube-api-access-g4mg8\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:02 crc kubenswrapper[4748]: I0216 15:15:02.980312 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-httpd-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.021834 4748 generic.go:334] "Generic (PLEG): container finished" podID="5828c48d-11da-4512-9c8d-75a789082601" containerID="3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb" exitCode=0
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.022208 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-85b968f78-p2mst"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.036041 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85b968f78-p2mst" event={"ID":"5828c48d-11da-4512-9c8d-75a789082601","Type":"ContainerDied","Data":"3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb"}
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.036088 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-85b968f78-p2mst" event={"ID":"5828c48d-11da-4512-9c8d-75a789082601","Type":"ContainerDied","Data":"d1b6b153d971b37db47bc054b3e8c5301a012509c71deb7f157e0890c03218a1"}
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.036105 4748 scope.go:117] "RemoveContainer" containerID="d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.052104 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-config" (OuterVolumeSpecName: "config") pod "5828c48d-11da-4512-9c8d-75a789082601" (UID: "5828c48d-11da-4512-9c8d-75a789082601"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.059313 4748 scope.go:117] "RemoveContainer" containerID="3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.059931 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5828c48d-11da-4512-9c8d-75a789082601" (UID: "5828c48d-11da-4512-9c8d-75a789082601"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.077827 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5828c48d-11da-4512-9c8d-75a789082601" (UID: "5828c48d-11da-4512-9c8d-75a789082601"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.078389 4748 scope.go:117] "RemoveContainer" containerID="d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596"
Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.079296 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596\": container with ID starting with d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596 not found: ID does not exist" containerID="d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.079388 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596"} err="failed to get container status \"d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596\": rpc error: code = NotFound desc = could not find container \"d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596\": container with ID starting with d46dd5e7d4b6353c8c646429798bb3ab4f05950ccbb0bd5a12693160a61f8596 not found: ID does not exist"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.079477 4748 scope.go:117] "RemoveContainer" containerID="3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb"
Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.079730 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb\": container with ID starting with 3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb not found: ID does not exist" containerID="3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.080057 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb"} err="failed to get container status \"3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb\": rpc error: code = NotFound desc = could not find container \"3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb\": container with ID starting with 3c27423400f61bec35abc16795b1e781fdf36171797c313c6cdc78abb05dd8bb not found: ID does not exist"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.082899 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.083514 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.083696 4748 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5828c48d-11da-4512-9c8d-75a789082601-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.427307 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6"
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.458882 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-85b968f78-p2mst"]
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.473690 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-85b968f78-p2mst"]
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.595326 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hff9x\" (UniqueName: \"kubernetes.io/projected/8a0cbe75-12eb-4085-b6f6-245a78c8b140-kube-api-access-hff9x\") pod \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") "
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.595513 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a0cbe75-12eb-4085-b6f6-245a78c8b140-secret-volume\") pod \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") "
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.595560 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a0cbe75-12eb-4085-b6f6-245a78c8b140-config-volume\") pod \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\" (UID: \"8a0cbe75-12eb-4085-b6f6-245a78c8b140\") "
Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.597016 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a0cbe75-12eb-4085-b6f6-245a78c8b140-config-volume" (OuterVolumeSpecName: "config-volume") pod "8a0cbe75-12eb-4085-b6f6-245a78c8b140" (UID: "8a0cbe75-12eb-4085-b6f6-245a78c8b140"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.601672 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0cbe75-12eb-4085-b6f6-245a78c8b140-kube-api-access-hff9x" (OuterVolumeSpecName: "kube-api-access-hff9x") pod "8a0cbe75-12eb-4085-b6f6-245a78c8b140" (UID: "8a0cbe75-12eb-4085-b6f6-245a78c8b140"). InnerVolumeSpecName "kube-api-access-hff9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.609089 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a0cbe75-12eb-4085-b6f6-245a78c8b140-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8a0cbe75-12eb-4085-b6f6-245a78c8b140" (UID: "8a0cbe75-12eb-4085-b6f6-245a78c8b140"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653177 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g84x5"] Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653732 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48a08b5e-7769-4af5-a383-36f041c2fc9d" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653752 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="48a08b5e-7769-4af5-a383-36f041c2fc9d" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653781 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64534f44-ea87-4c15-a1a6-a9c2e8b799dd" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653791 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="64534f44-ea87-4c15-a1a6-a9c2e8b799dd" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: 
E0216 15:15:03.653811 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-httpd" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653820 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-httpd" Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653830 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7846c19e-8dd8-4362-8324-3f23e257f4f4" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653838 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="7846c19e-8dd8-4362-8324-3f23e257f4f4" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653851 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-api" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653859 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-api" Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653875 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0deb4ac-44ab-4427-a6d8-ec4dcc55981e" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653883 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0deb4ac-44ab-4427-a6d8-ec4dcc55981e" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653894 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ec9879-a2a4-429f-af70-796e8246fce9" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653902 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ec9879-a2a4-429f-af70-796e8246fce9" 
containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653927 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0cbe75-12eb-4085-b6f6-245a78c8b140" containerName="collect-profiles" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653936 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0cbe75-12eb-4085-b6f6-245a78c8b140" containerName="collect-profiles" Feb 16 15:15:03 crc kubenswrapper[4748]: E0216 15:15:03.653948 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79ac09ea-96de-4c66-b3ba-f41bf6993859" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.653956 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="79ac09ea-96de-4c66-b3ba-f41bf6993859" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654207 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-api" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654222 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ec9879-a2a4-429f-af70-796e8246fce9" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654239 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="79ac09ea-96de-4c66-b3ba-f41bf6993859" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654249 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="64534f44-ea87-4c15-a1a6-a9c2e8b799dd" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654264 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="7846c19e-8dd8-4362-8324-3f23e257f4f4" containerName="mariadb-database-create" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654281 4748 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="8a0cbe75-12eb-4085-b6f6-245a78c8b140" containerName="collect-profiles" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654301 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0deb4ac-44ab-4427-a6d8-ec4dcc55981e" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654314 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="5828c48d-11da-4512-9c8d-75a789082601" containerName="neutron-httpd" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.654327 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="48a08b5e-7769-4af5-a383-36f041c2fc9d" containerName="mariadb-account-create-update" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.655293 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.657183 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p6hbg" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.657464 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.660362 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.673829 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g84x5"] Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.698388 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hff9x\" (UniqueName: \"kubernetes.io/projected/8a0cbe75-12eb-4085-b6f6-245a78c8b140-kube-api-access-hff9x\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.698428 4748 
reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8a0cbe75-12eb-4085-b6f6-245a78c8b140-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.698440 4748 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8a0cbe75-12eb-4085-b6f6-245a78c8b140-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.800782 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.801039 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-scripts\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.801082 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jrt\" (UniqueName: \"kubernetes.io/projected/993b883d-8949-4e81-87a0-efed48d8dc55-kube-api-access-98jrt\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.801145 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-config-data\") pod 
\"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.903027 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.903120 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-scripts\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.903924 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jrt\" (UniqueName: \"kubernetes.io/projected/993b883d-8949-4e81-87a0-efed48d8dc55-kube-api-access-98jrt\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.904065 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-config-data\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.908525 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-combined-ca-bundle\") pod 
\"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.908906 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-config-data\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.914948 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-scripts\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:03 crc kubenswrapper[4748]: I0216 15:15:03.928935 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jrt\" (UniqueName: \"kubernetes.io/projected/993b883d-8949-4e81-87a0-efed48d8dc55-kube-api-access-98jrt\") pod \"nova-cell0-conductor-db-sync-g84x5\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:04 crc kubenswrapper[4748]: I0216 15:15:04.027278 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:04 crc kubenswrapper[4748]: I0216 15:15:04.033442 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6" event={"ID":"8a0cbe75-12eb-4085-b6f6-245a78c8b140","Type":"ContainerDied","Data":"785cb88f58121da7f386fd0830a6a27846bd7bea7139bef0921f115234ae74dd"} Feb 16 15:15:04 crc kubenswrapper[4748]: I0216 15:15:04.033500 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="785cb88f58121da7f386fd0830a6a27846bd7bea7139bef0921f115234ae74dd" Feb 16 15:15:04 crc kubenswrapper[4748]: I0216 15:15:04.033460 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520915-vflj6" Feb 16 15:15:04 crc kubenswrapper[4748]: I0216 15:15:04.487497 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g84x5"] Feb 16 15:15:04 crc kubenswrapper[4748]: I0216 15:15:04.658916 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-588bd888d5-jbdss" Feb 16 15:15:04 crc kubenswrapper[4748]: I0216 15:15:04.670544 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-588bd888d5-jbdss" Feb 16 15:15:05 crc kubenswrapper[4748]: I0216 15:15:05.008570 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5828c48d-11da-4512-9c8d-75a789082601" path="/var/lib/kubelet/pods/5828c48d-11da-4512-9c8d-75a789082601/volumes" Feb 16 15:15:05 crc kubenswrapper[4748]: I0216 15:15:05.048288 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g84x5" event={"ID":"993b883d-8949-4e81-87a0-efed48d8dc55","Type":"ContainerStarted","Data":"9546fc01222ca08a9ca60347f4e94aea83fc4ad0850838dc19be3331834bb823"} Feb 16 15:15:07 crc 
kubenswrapper[4748]: I0216 15:15:07.532919 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:15:07 crc kubenswrapper[4748]: I0216 15:15:07.564173 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6b69f6f9cb-8v6bm" Feb 16 15:15:07 crc kubenswrapper[4748]: I0216 15:15:07.634790 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-55548d48fb-q5fcz"] Feb 16 15:15:07 crc kubenswrapper[4748]: I0216 15:15:07.635197 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-55548d48fb-q5fcz" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-log" containerID="cri-o://9c23b9e8b7678b473e739587720b0501416475bd073be069dc36caaa9725765f" gracePeriod=30 Feb 16 15:15:07 crc kubenswrapper[4748]: I0216 15:15:07.635394 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-55548d48fb-q5fcz" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-api" containerID="cri-o://e770cb4557633c64132dede327871bade81ff4f743e5a53a45f71a9761ae0b7a" gracePeriod=30 Feb 16 15:15:08 crc kubenswrapper[4748]: I0216 15:15:08.087859 4748 generic.go:334] "Generic (PLEG): container finished" podID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerID="9c23b9e8b7678b473e739587720b0501416475bd073be069dc36caaa9725765f" exitCode=143 Feb 16 15:15:08 crc kubenswrapper[4748]: I0216 15:15:08.087913 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55548d48fb-q5fcz" event={"ID":"2d1d6f88-2878-4893-a7de-e671f7e25ad9","Type":"ContainerDied","Data":"9c23b9e8b7678b473e739587720b0501416475bd073be069dc36caaa9725765f"} Feb 16 15:15:11 crc kubenswrapper[4748]: E0216 15:15:11.050361 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: 
\"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:15:12 crc kubenswrapper[4748]: I0216 15:15:12.202051 4748 generic.go:334] "Generic (PLEG): container finished" podID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerID="e770cb4557633c64132dede327871bade81ff4f743e5a53a45f71a9761ae0b7a" exitCode=0 Feb 16 15:15:12 crc kubenswrapper[4748]: I0216 15:15:12.202265 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55548d48fb-q5fcz" event={"ID":"2d1d6f88-2878-4893-a7de-e671f7e25ad9","Type":"ContainerDied","Data":"e770cb4557633c64132dede327871bade81ff4f743e5a53a45f71a9761ae0b7a"} Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.315302 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.446968 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-internal-tls-certs\") pod \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.447106 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-scripts\") pod \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.447185 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl5mm\" (UniqueName: \"kubernetes.io/projected/2d1d6f88-2878-4893-a7de-e671f7e25ad9-kube-api-access-nl5mm\") pod \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\" (UID: 
\"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.447216 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-combined-ca-bundle\") pod \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.447238 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1d6f88-2878-4893-a7de-e671f7e25ad9-logs\") pod \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.447354 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-config-data\") pod \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.447420 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-public-tls-certs\") pod \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\" (UID: \"2d1d6f88-2878-4893-a7de-e671f7e25ad9\") " Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.448522 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1d6f88-2878-4893-a7de-e671f7e25ad9-logs" (OuterVolumeSpecName: "logs") pod "2d1d6f88-2878-4893-a7de-e671f7e25ad9" (UID: "2d1d6f88-2878-4893-a7de-e671f7e25ad9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.453649 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-scripts" (OuterVolumeSpecName: "scripts") pod "2d1d6f88-2878-4893-a7de-e671f7e25ad9" (UID: "2d1d6f88-2878-4893-a7de-e671f7e25ad9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.455212 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1d6f88-2878-4893-a7de-e671f7e25ad9-kube-api-access-nl5mm" (OuterVolumeSpecName: "kube-api-access-nl5mm") pod "2d1d6f88-2878-4893-a7de-e671f7e25ad9" (UID: "2d1d6f88-2878-4893-a7de-e671f7e25ad9"). InnerVolumeSpecName "kube-api-access-nl5mm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.505466 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-config-data" (OuterVolumeSpecName: "config-data") pod "2d1d6f88-2878-4893-a7de-e671f7e25ad9" (UID: "2d1d6f88-2878-4893-a7de-e671f7e25ad9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.523957 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d1d6f88-2878-4893-a7de-e671f7e25ad9" (UID: "2d1d6f88-2878-4893-a7de-e671f7e25ad9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.550609 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.550650 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl5mm\" (UniqueName: \"kubernetes.io/projected/2d1d6f88-2878-4893-a7de-e671f7e25ad9-kube-api-access-nl5mm\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.550665 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.550677 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d1d6f88-2878-4893-a7de-e671f7e25ad9-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.550686 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.570932 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2d1d6f88-2878-4893-a7de-e671f7e25ad9" (UID: "2d1d6f88-2878-4893-a7de-e671f7e25ad9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.572032 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2d1d6f88-2878-4893-a7de-e671f7e25ad9" (UID: "2d1d6f88-2878-4893-a7de-e671f7e25ad9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.656113 4748 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:14 crc kubenswrapper[4748]: I0216 15:15:14.656412 4748 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1d6f88-2878-4893-a7de-e671f7e25ad9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.259197 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g84x5" event={"ID":"993b883d-8949-4e81-87a0-efed48d8dc55","Type":"ContainerStarted","Data":"48255c30433af25753936841b402a13143179682ddc5aa13ffddf1996a4d7b3d"} Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.262469 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55548d48fb-q5fcz" event={"ID":"2d1d6f88-2878-4893-a7de-e671f7e25ad9","Type":"ContainerDied","Data":"da86eb5e43852667a6b5fbbd3779c52dc4236663ed27de533ecf87110f989cbd"} Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.262507 4748 scope.go:117] "RemoveContainer" containerID="e770cb4557633c64132dede327871bade81ff4f743e5a53a45f71a9761ae0b7a" Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.262623 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55548d48fb-q5fcz" Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.294199 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-g84x5" podStartSLOduration=2.7905548529999997 podStartE2EDuration="12.294177109s" podCreationTimestamp="2026-02-16 15:15:03 +0000 UTC" firstStartedPulling="2026-02-16 15:15:04.49158304 +0000 UTC m=+1330.183252089" lastFinishedPulling="2026-02-16 15:15:13.995205306 +0000 UTC m=+1339.686874345" observedRunningTime="2026-02-16 15:15:15.28116394 +0000 UTC m=+1340.972832989" watchObservedRunningTime="2026-02-16 15:15:15.294177109 +0000 UTC m=+1340.985846148" Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.309703 4748 scope.go:117] "RemoveContainer" containerID="9c23b9e8b7678b473e739587720b0501416475bd073be069dc36caaa9725765f" Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.318836 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-55548d48fb-q5fcz"] Feb 16 15:15:15 crc kubenswrapper[4748]: I0216 15:15:15.328899 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-55548d48fb-q5fcz"] Feb 16 15:15:17 crc kubenswrapper[4748]: I0216 15:15:17.006982 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" path="/var/lib/kubelet/pods/2d1d6f88-2878-4893-a7de-e671f7e25ad9/volumes" Feb 16 15:15:21 crc kubenswrapper[4748]: E0216 15:15:21.996890 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:15:23 crc kubenswrapper[4748]: I0216 15:15:23.445290 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-external-api-0"] Feb 16 15:15:23 crc kubenswrapper[4748]: I0216 15:15:23.446109 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-log" containerID="cri-o://d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee" gracePeriod=30 Feb 16 15:15:23 crc kubenswrapper[4748]: I0216 15:15:23.446143 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-httpd" containerID="cri-o://3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92" gracePeriod=30 Feb 16 15:15:24 crc kubenswrapper[4748]: I0216 15:15:24.378779 4748 generic.go:334] "Generic (PLEG): container finished" podID="6e869859-8d55-4b07-90cf-6936061845a0" containerID="d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee" exitCode=143 Feb 16 15:15:24 crc kubenswrapper[4748]: I0216 15:15:24.378879 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e869859-8d55-4b07-90cf-6936061845a0","Type":"ContainerDied","Data":"d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee"} Feb 16 15:15:25 crc kubenswrapper[4748]: I0216 15:15:25.820920 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:15:25 crc kubenswrapper[4748]: I0216 15:15:25.821130 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-log" containerID="cri-o://42ff2e37f171cbbdb115d789d95c7d0a9d7f806fc398467233c037595dddbdc6" gracePeriod=30 Feb 16 15:15:25 crc kubenswrapper[4748]: I0216 15:15:25.821205 4748 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/glance-default-internal-api-0" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-httpd" containerID="cri-o://61ef039aadfd8586ab9a914e4454b54661e24f1094dc63a4d50ac8b29fef7ecd" gracePeriod=30 Feb 16 15:15:26 crc kubenswrapper[4748]: I0216 15:15:26.399385 4748 generic.go:334] "Generic (PLEG): container finished" podID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerID="42ff2e37f171cbbdb115d789d95c7d0a9d7f806fc398467233c037595dddbdc6" exitCode=143 Feb 16 15:15:26 crc kubenswrapper[4748]: I0216 15:15:26.399689 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f72feb-de93-4fb2-a936-b1e69c347a7b","Type":"ContainerDied","Data":"42ff2e37f171cbbdb115d789d95c7d0a9d7f806fc398467233c037595dddbdc6"} Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.230331 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.306340 4748 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod27bb3fa2-44eb-471c-a426-77d77d572ebb"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod27bb3fa2-44eb-471c-a426-77d77d572ebb] : Timed out while waiting for systemd to remove kubepods-besteffort-pod27bb3fa2_44eb_471c_a426_77d77d572ebb.slice" Feb 16 15:15:27 crc kubenswrapper[4748]: E0216 15:15:27.306396 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod27bb3fa2-44eb-471c-a426-77d77d572ebb] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod27bb3fa2-44eb-471c-a426-77d77d572ebb] : Timed out while waiting for systemd to remove kubepods-besteffort-pod27bb3fa2_44eb_471c_a426_77d77d572ebb.slice" pod="openstack/ceilometer-0" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 
15:15:27.319367 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clf4s\" (UniqueName: \"kubernetes.io/projected/6e869859-8d55-4b07-90cf-6936061845a0-kube-api-access-clf4s\") pod \"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.319415 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-scripts\") pod \"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.319626 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.319657 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-logs\") pod \"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.319812 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-config-data\") pod \"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.319841 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-combined-ca-bundle\") pod 
\"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.319912 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-public-tls-certs\") pod \"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.319951 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-httpd-run\") pod \"6e869859-8d55-4b07-90cf-6936061845a0\" (UID: \"6e869859-8d55-4b07-90cf-6936061845a0\") " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.320681 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.332884 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e869859-8d55-4b07-90cf-6936061845a0-kube-api-access-clf4s" (OuterVolumeSpecName: "kube-api-access-clf4s") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "kube-api-access-clf4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.334816 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-logs" (OuterVolumeSpecName: "logs") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.337729 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-scripts" (OuterVolumeSpecName: "scripts") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.363283 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc" (OuterVolumeSpecName: "glance") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.367941 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.394867 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.411382 4748 generic.go:334] "Generic (PLEG): container finished" podID="993b883d-8949-4e81-87a0-efed48d8dc55" containerID="48255c30433af25753936841b402a13143179682ddc5aa13ffddf1996a4d7b3d" exitCode=0 Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.411458 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g84x5" event={"ID":"993b883d-8949-4e81-87a0-efed48d8dc55","Type":"ContainerDied","Data":"48255c30433af25753936841b402a13143179682ddc5aa13ffddf1996a4d7b3d"} Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.416802 4748 generic.go:334] "Generic (PLEG): container finished" podID="6e869859-8d55-4b07-90cf-6936061845a0" containerID="3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92" exitCode=0 Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.416885 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.418605 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e869859-8d55-4b07-90cf-6936061845a0","Type":"ContainerDied","Data":"3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92"} Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.418657 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6e869859-8d55-4b07-90cf-6936061845a0","Type":"ContainerDied","Data":"43ae2f4ff0eee4427ac918bc4319a9a394595efe61cb5b324f8cc2a0a177ba17"} Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.418675 4748 scope.go:117] "RemoveContainer" containerID="3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.422323 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.422963 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.423026 4748 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") on node \"crc\" " Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.423579 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.428479 4748 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.428741 4748 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.428810 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6e869859-8d55-4b07-90cf-6936061845a0-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.428881 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clf4s\" (UniqueName: \"kubernetes.io/projected/6e869859-8d55-4b07-90cf-6936061845a0-kube-api-access-clf4s\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.439675 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-config-data" (OuterVolumeSpecName: "config-data") pod "6e869859-8d55-4b07-90cf-6936061845a0" (UID: "6e869859-8d55-4b07-90cf-6936061845a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.466893 4748 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.467548 4748 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc") on node "crc" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.505044 4748 scope.go:117] "RemoveContainer" containerID="d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.512958 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.530875 4748 reconciler_common.go:293] "Volume detached for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.530909 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e869859-8d55-4b07-90cf-6936061845a0-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.538099 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.539915 4748 scope.go:117] "RemoveContainer" containerID="3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92" Feb 16 15:15:27 crc kubenswrapper[4748]: E0216 15:15:27.540431 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92\": container with ID starting with 3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92 not found: ID does not exist" containerID="3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92" Feb 16 15:15:27 
crc kubenswrapper[4748]: I0216 15:15:27.540468 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92"} err="failed to get container status \"3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92\": rpc error: code = NotFound desc = could not find container \"3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92\": container with ID starting with 3d2d5a1368b3f3b69f81675b2f3a228b2de0d6695320d59e68b2fe287ed14a92 not found: ID does not exist" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.540496 4748 scope.go:117] "RemoveContainer" containerID="d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee" Feb 16 15:15:27 crc kubenswrapper[4748]: E0216 15:15:27.540748 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee\": container with ID starting with d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee not found: ID does not exist" containerID="d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.540777 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee"} err="failed to get container status \"d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee\": rpc error: code = NotFound desc = could not find container \"d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee\": container with ID starting with d1f89bb3c66a16574ab7391afe7987fef7904b3b09084e7237fd77063212c0ee not found: ID does not exist" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.562520 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:27 crc kubenswrapper[4748]: 
E0216 15:15:27.563007 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-log" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563024 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-log" Feb 16 15:15:27 crc kubenswrapper[4748]: E0216 15:15:27.563036 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-log" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563043 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-log" Feb 16 15:15:27 crc kubenswrapper[4748]: E0216 15:15:27.563091 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-httpd" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563099 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-httpd" Feb 16 15:15:27 crc kubenswrapper[4748]: E0216 15:15:27.563122 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-api" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563130 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-api" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563338 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-httpd" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563353 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-log" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563362 4748 
memory_manager.go:354] "RemoveStaleState removing state" podUID="6e869859-8d55-4b07-90cf-6936061845a0" containerName="glance-log" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.563371 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d6f88-2878-4893-a7de-e671f7e25ad9" containerName="placement-api" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.565199 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.567426 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.567509 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.583105 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.734238 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.734298 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.734322 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.734429 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.734543 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-scripts\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.734775 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z29xs\" (UniqueName: \"kubernetes.io/projected/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-kube-api-access-z29xs\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.734867 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-config-data\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.759900 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.769808 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 
16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.799834 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.801681 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.824267 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.829829 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.830032 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.837329 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-scripts\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.837505 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z29xs\" (UniqueName: \"kubernetes.io/projected/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-kube-api-access-z29xs\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.837567 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-config-data\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.837651 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.837700 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.837745 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.837831 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.838532 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-run-httpd\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.838730 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-log-httpd\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " 
pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.845272 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.864497 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-scripts\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.867607 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z29xs\" (UniqueName: \"kubernetes.io/projected/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-kube-api-access-z29xs\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.869618 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.870246 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-config-data\") pod \"ceilometer-0\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.891937 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940223 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940314 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e42defba-7cb0-4599-bdcb-34df647a38ab-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940394 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-config-data\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940460 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940495 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57czw\" (UniqueName: \"kubernetes.io/projected/e42defba-7cb0-4599-bdcb-34df647a38ab-kube-api-access-57czw\") pod 
\"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940524 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42defba-7cb0-4599-bdcb-34df647a38ab-logs\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940570 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:27 crc kubenswrapper[4748]: I0216 15:15:27.940611 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-scripts\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.041907 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.042234 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/e42defba-7cb0-4599-bdcb-34df647a38ab-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.042290 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-config-data\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.042349 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.042371 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57czw\" (UniqueName: \"kubernetes.io/projected/e42defba-7cb0-4599-bdcb-34df647a38ab-kube-api-access-57czw\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.042391 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42defba-7cb0-4599-bdcb-34df647a38ab-logs\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.042422 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.042444 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-scripts\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.043748 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e42defba-7cb0-4599-bdcb-34df647a38ab-logs\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.043802 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e42defba-7cb0-4599-bdcb-34df647a38ab-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.048680 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-scripts\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.049513 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-public-tls-certs\") pod \"glance-default-external-api-0\" 
(UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.049604 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.049644 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d294a5b1f89c6c34c7d61f2a887a4dff0b1fa943cd0603a1d259c16f2f816998/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.051051 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.057055 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e42defba-7cb0-4599-bdcb-34df647a38ab-config-data\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.068477 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57czw\" (UniqueName: \"kubernetes.io/projected/e42defba-7cb0-4599-bdcb-34df647a38ab-kube-api-access-57czw\") pod \"glance-default-external-api-0\" (UID: 
\"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.154680 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-39639982-ad08-4b12-9d2b-2cf56a60b8cc\") pod \"glance-default-external-api-0\" (UID: \"e42defba-7cb0-4599-bdcb-34df647a38ab\") " pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.162217 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.237190 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.410679 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.428981 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerStarted","Data":"5fcecfc92d4a8727a93a0328bcc70c219d4909bae09016a382dfb23aa44b83c7"} Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.752907 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.820112 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.967317 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-scripts\") pod \"993b883d-8949-4e81-87a0-efed48d8dc55\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.967986 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-config-data\") pod \"993b883d-8949-4e81-87a0-efed48d8dc55\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.968091 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-combined-ca-bundle\") pod \"993b883d-8949-4e81-87a0-efed48d8dc55\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.968229 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jrt\" (UniqueName: \"kubernetes.io/projected/993b883d-8949-4e81-87a0-efed48d8dc55-kube-api-access-98jrt\") pod \"993b883d-8949-4e81-87a0-efed48d8dc55\" (UID: \"993b883d-8949-4e81-87a0-efed48d8dc55\") " Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.973668 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-scripts" (OuterVolumeSpecName: "scripts") pod "993b883d-8949-4e81-87a0-efed48d8dc55" (UID: "993b883d-8949-4e81-87a0-efed48d8dc55"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:28 crc kubenswrapper[4748]: I0216 15:15:28.973700 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993b883d-8949-4e81-87a0-efed48d8dc55-kube-api-access-98jrt" (OuterVolumeSpecName: "kube-api-access-98jrt") pod "993b883d-8949-4e81-87a0-efed48d8dc55" (UID: "993b883d-8949-4e81-87a0-efed48d8dc55"). InnerVolumeSpecName "kube-api-access-98jrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.002975 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-config-data" (OuterVolumeSpecName: "config-data") pod "993b883d-8949-4e81-87a0-efed48d8dc55" (UID: "993b883d-8949-4e81-87a0-efed48d8dc55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.009472 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27bb3fa2-44eb-471c-a426-77d77d572ebb" path="/var/lib/kubelet/pods/27bb3fa2-44eb-471c-a426-77d77d572ebb/volumes" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.010269 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e869859-8d55-4b07-90cf-6936061845a0" path="/var/lib/kubelet/pods/6e869859-8d55-4b07-90cf-6936061845a0/volumes" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.022194 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "993b883d-8949-4e81-87a0-efed48d8dc55" (UID: "993b883d-8949-4e81-87a0-efed48d8dc55"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.093326 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jrt\" (UniqueName: \"kubernetes.io/projected/993b883d-8949-4e81-87a0-efed48d8dc55-kube-api-access-98jrt\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.093361 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.093370 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.093379 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/993b883d-8949-4e81-87a0-efed48d8dc55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.464895 4748 generic.go:334] "Generic (PLEG): container finished" podID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerID="61ef039aadfd8586ab9a914e4454b54661e24f1094dc63a4d50ac8b29fef7ecd" exitCode=0 Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.465072 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f72feb-de93-4fb2-a936-b1e69c347a7b","Type":"ContainerDied","Data":"61ef039aadfd8586ab9a914e4454b54661e24f1094dc63a4d50ac8b29fef7ecd"} Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.487620 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"e42defba-7cb0-4599-bdcb-34df647a38ab","Type":"ContainerStarted","Data":"8396c4af1b12f4f6c2289cfa55bf01222c6f8d12a8fda4fd2089e0d71eb22948"} Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.491281 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-g84x5" event={"ID":"993b883d-8949-4e81-87a0-efed48d8dc55","Type":"ContainerDied","Data":"9546fc01222ca08a9ca60347f4e94aea83fc4ad0850838dc19be3331834bb823"} Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.491320 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9546fc01222ca08a9ca60347f4e94aea83fc4ad0850838dc19be3331834bb823" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.491393 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-g84x5" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.573959 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:15:29 crc kubenswrapper[4748]: E0216 15:15:29.574933 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993b883d-8949-4e81-87a0-efed48d8dc55" containerName="nova-cell0-conductor-db-sync" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.574958 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="993b883d-8949-4e81-87a0-efed48d8dc55" containerName="nova-cell0-conductor-db-sync" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.575258 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="993b883d-8949-4e81-87a0-efed48d8dc55" containerName="nova-cell0-conductor-db-sync" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.576313 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.582297 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.582528 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p6hbg" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.585274 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.719828 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d4f738-5922-4ccb-a771-33aeccf2264f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.722069 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snkzl\" (UniqueName: \"kubernetes.io/projected/66d4f738-5922-4ccb-a771-33aeccf2264f-kube-api-access-snkzl\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.722492 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d4f738-5922-4ccb-a771-33aeccf2264f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.773142 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.824080 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d4f738-5922-4ccb-a771-33aeccf2264f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.824142 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d4f738-5922-4ccb-a771-33aeccf2264f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.824251 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snkzl\" (UniqueName: \"kubernetes.io/projected/66d4f738-5922-4ccb-a771-33aeccf2264f-kube-api-access-snkzl\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.830072 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66d4f738-5922-4ccb-a771-33aeccf2264f-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.845058 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66d4f738-5922-4ccb-a771-33aeccf2264f-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 
15:15:29.879462 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snkzl\" (UniqueName: \"kubernetes.io/projected/66d4f738-5922-4ccb-a771-33aeccf2264f-kube-api-access-snkzl\") pod \"nova-cell0-conductor-0\" (UID: \"66d4f738-5922-4ccb-a771-33aeccf2264f\") " pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.925509 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-logs\") pod \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.925588 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-config-data\") pod \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.925638 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-scripts\") pod \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.925753 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-httpd-run\") pod \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.925917 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod 
\"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.926039 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-combined-ca-bundle\") pod \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.926122 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-internal-tls-certs\") pod \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.926149 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42f65\" (UniqueName: \"kubernetes.io/projected/d1f72feb-de93-4fb2-a936-b1e69c347a7b-kube-api-access-42f65\") pod \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\" (UID: \"d1f72feb-de93-4fb2-a936-b1e69c347a7b\") " Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.926264 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.926447 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-logs" (OuterVolumeSpecName: "logs") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.926888 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.926919 4748 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1f72feb-de93-4fb2-a936-b1e69c347a7b-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.931940 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-scripts" (OuterVolumeSpecName: "scripts") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.944948 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f72feb-de93-4fb2-a936-b1e69c347a7b-kube-api-access-42f65" (OuterVolumeSpecName: "kube-api-access-42f65") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "kube-api-access-42f65". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.953149 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f" (OuterVolumeSpecName: "glance") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.962506 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:29 crc kubenswrapper[4748]: I0216 15:15:29.998891 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.011024 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.024608 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-config-data" (OuterVolumeSpecName: "config-data") pod "d1f72feb-de93-4fb2-a936-b1e69c347a7b" (UID: "d1f72feb-de93-4fb2-a936-b1e69c347a7b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.032096 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.032116 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.032144 4748 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") on node \"crc\" " Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.032154 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.032166 4748 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1f72feb-de93-4fb2-a936-b1e69c347a7b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.032175 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42f65\" (UniqueName: \"kubernetes.io/projected/d1f72feb-de93-4fb2-a936-b1e69c347a7b-kube-api-access-42f65\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.083062 4748 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.083548 4748 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f") on node "crc" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.134350 4748 reconciler_common.go:293] "Volume detached for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.528533 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 16 15:15:30 crc kubenswrapper[4748]: W0216 15:15:30.531037 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66d4f738_5922_4ccb_a771_33aeccf2264f.slice/crio-7dda09370bdbf07558cb06009822bff8ab107304737b036627a491e60ea6eadb WatchSource:0}: Error finding container 7dda09370bdbf07558cb06009822bff8ab107304737b036627a491e60ea6eadb: Status 404 returned error can't find the container with id 7dda09370bdbf07558cb06009822bff8ab107304737b036627a491e60ea6eadb Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.537733 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.538116 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d1f72feb-de93-4fb2-a936-b1e69c347a7b","Type":"ContainerDied","Data":"0feb7fb0a091645a3633d680f57cfafe9b4805fa4b46635de6c91772f2e08cc4"} Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.538170 4748 scope.go:117] "RemoveContainer" containerID="61ef039aadfd8586ab9a914e4454b54661e24f1094dc63a4d50ac8b29fef7ecd" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.558484 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e42defba-7cb0-4599-bdcb-34df647a38ab","Type":"ContainerStarted","Data":"6fc19a5e2f1318002fee0b0a1e7c3a350aff102a37f9ea10b36fb9c0f7829dd4"} Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.558653 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e42defba-7cb0-4599-bdcb-34df647a38ab","Type":"ContainerStarted","Data":"785cd9dd5a208a0e65f8436f4215d3c2680ba1ac3e87919a3c7c8e66050be28a"} Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.567831 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerStarted","Data":"2868a699845efa013f08c2bbced70e222c9c4fd131c940b9b2ba9c9f0803e3e6"} Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.568172 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerStarted","Data":"391797343a30e5f1af7c3cc18880f7104bcdcdfdcd5946f41a0e1639f9b87f02"} Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.604441 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" 
podStartSLOduration=3.603336537 podStartE2EDuration="3.603336537s" podCreationTimestamp="2026-02-16 15:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:30.580228941 +0000 UTC m=+1356.271897990" watchObservedRunningTime="2026-02-16 15:15:30.603336537 +0000 UTC m=+1356.295005566" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.704363 4748 scope.go:117] "RemoveContainer" containerID="42ff2e37f171cbbdb115d789d95c7d0a9d7f806fc398467233c037595dddbdc6" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.725888 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.741956 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.761531 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:15:30 crc kubenswrapper[4748]: E0216 15:15:30.762103 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-httpd" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.762129 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-httpd" Feb 16 15:15:30 crc kubenswrapper[4748]: E0216 15:15:30.762158 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-log" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.762167 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-log" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.762422 4748 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-log" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.762451 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" containerName="glance-httpd" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.763620 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.766402 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.766642 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.807095 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856040 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856107 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856152 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856191 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856224 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnqtg\" (UniqueName: \"kubernetes.io/projected/f5918057-501d-4f3a-8d34-759a39e28502-kube-api-access-cnqtg\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856253 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5918057-501d-4f3a-8d34-759a39e28502-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856269 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.856322 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5918057-501d-4f3a-8d34-759a39e28502-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.959037 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.959312 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.959486 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.959649 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.959817 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cnqtg\" (UniqueName: \"kubernetes.io/projected/f5918057-501d-4f3a-8d34-759a39e28502-kube-api-access-cnqtg\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.959969 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5918057-501d-4f3a-8d34-759a39e28502-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.960071 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.960356 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5918057-501d-4f3a-8d34-759a39e28502-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.960833 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f5918057-501d-4f3a-8d34-759a39e28502-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.960980 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/f5918057-501d-4f3a-8d34-759a39e28502-logs\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.963731 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.963923 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.964581 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.965414 4748 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.965563 4748 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f58f00d51b81be0c55943aba0909dac7acd0e6134cb135d989bed8b6a75cb071/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.969040 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5918057-501d-4f3a-8d34-759a39e28502-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:30 crc kubenswrapper[4748]: I0216 15:15:30.987208 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnqtg\" (UniqueName: \"kubernetes.io/projected/f5918057-501d-4f3a-8d34-759a39e28502-kube-api-access-cnqtg\") pod \"glance-default-internal-api-0\" (UID: \"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.007795 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f72feb-de93-4fb2-a936-b1e69c347a7b" path="/var/lib/kubelet/pods/d1f72feb-de93-4fb2-a936-b1e69c347a7b/volumes" Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.015288 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f4d9592e-54ff-4e80-9c26-c47f260dfb9f\") pod \"glance-default-internal-api-0\" (UID: 
\"f5918057-501d-4f3a-8d34-759a39e28502\") " pod="openstack/glance-default-internal-api-0" Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.094141 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.580296 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerStarted","Data":"0b54e2c3f99e71510b5e29ec3d639cf43bb11d9d2cd6b803ed760fcf3601ed8f"} Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.582166 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66d4f738-5922-4ccb-a771-33aeccf2264f","Type":"ContainerStarted","Data":"b3aa8e55267d878dc465199933319c23e8e2361670cb290922f4a585edf45ef6"} Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.582211 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66d4f738-5922-4ccb-a771-33aeccf2264f","Type":"ContainerStarted","Data":"7dda09370bdbf07558cb06009822bff8ab107304737b036627a491e60ea6eadb"} Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.610263 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.610240324 podStartE2EDuration="2.610240324s" podCreationTimestamp="2026-02-16 15:15:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:31.596824826 +0000 UTC m=+1357.288493865" watchObservedRunningTime="2026-02-16 15:15:31.610240324 +0000 UTC m=+1357.301909363" Feb 16 15:15:31 crc kubenswrapper[4748]: W0216 15:15:31.755082 4748 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5918057_501d_4f3a_8d34_759a39e28502.slice/crio-c25aadcdc67a19d63d4cdaaaadd29167664e725751d5ae18ff7c45bcdb34f64d WatchSource:0}: Error finding container c25aadcdc67a19d63d4cdaaaadd29167664e725751d5ae18ff7c45bcdb34f64d: Status 404 returned error can't find the container with id c25aadcdc67a19d63d4cdaaaadd29167664e725751d5ae18ff7c45bcdb34f64d Feb 16 15:15:31 crc kubenswrapper[4748]: I0216 15:15:31.756003 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 16 15:15:32 crc kubenswrapper[4748]: I0216 15:15:32.598081 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5918057-501d-4f3a-8d34-759a39e28502","Type":"ContainerStarted","Data":"2a75c81fcfd5e5ff9ee3a26399e69771cdaf598bb50db75ac5021371ee6d936a"} Feb 16 15:15:32 crc kubenswrapper[4748]: I0216 15:15:32.599733 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5918057-501d-4f3a-8d34-759a39e28502","Type":"ContainerStarted","Data":"c25aadcdc67a19d63d4cdaaaadd29167664e725751d5ae18ff7c45bcdb34f64d"} Feb 16 15:15:32 crc kubenswrapper[4748]: I0216 15:15:32.599832 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:33 crc kubenswrapper[4748]: I0216 15:15:33.612857 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f5918057-501d-4f3a-8d34-759a39e28502","Type":"ContainerStarted","Data":"22eeccfec26da64e03c37e1bc7d537d717c2ac9bc47f2f0935a0d31d12904975"} Feb 16 15:15:33 crc kubenswrapper[4748]: I0216 15:15:33.641702 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.6416824500000002 podStartE2EDuration="3.64168245s" podCreationTimestamp="2026-02-16 15:15:30 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:33.636165794 +0000 UTC m=+1359.327834843" watchObservedRunningTime="2026-02-16 15:15:33.64168245 +0000 UTC m=+1359.333351479" Feb 16 15:15:34 crc kubenswrapper[4748]: I0216 15:15:34.625176 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerStarted","Data":"e67ff66d8217ca39b545f0d490aa22c3c9be52e303db8c81b1732a962a5dded9"} Feb 16 15:15:34 crc kubenswrapper[4748]: I0216 15:15:34.625219 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-central-agent" containerID="cri-o://2868a699845efa013f08c2bbced70e222c9c4fd131c940b9b2ba9c9f0803e3e6" gracePeriod=30 Feb 16 15:15:34 crc kubenswrapper[4748]: I0216 15:15:34.625301 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="sg-core" containerID="cri-o://0b54e2c3f99e71510b5e29ec3d639cf43bb11d9d2cd6b803ed760fcf3601ed8f" gracePeriod=30 Feb 16 15:15:34 crc kubenswrapper[4748]: I0216 15:15:34.625330 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="proxy-httpd" containerID="cri-o://e67ff66d8217ca39b545f0d490aa22c3c9be52e303db8c81b1732a962a5dded9" gracePeriod=30 Feb 16 15:15:34 crc kubenswrapper[4748]: I0216 15:15:34.625344 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-notification-agent" containerID="cri-o://391797343a30e5f1af7c3cc18880f7104bcdcdfdcd5946f41a0e1639f9b87f02" gracePeriod=30 Feb 16 15:15:34 crc 
kubenswrapper[4748]: I0216 15:15:34.625776 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:15:34 crc kubenswrapper[4748]: I0216 15:15:34.664693 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.614619796 podStartE2EDuration="7.664673511s" podCreationTimestamp="2026-02-16 15:15:27 +0000 UTC" firstStartedPulling="2026-02-16 15:15:28.408848767 +0000 UTC m=+1354.100517806" lastFinishedPulling="2026-02-16 15:15:33.458902472 +0000 UTC m=+1359.150571521" observedRunningTime="2026-02-16 15:15:34.650985955 +0000 UTC m=+1360.342654994" watchObservedRunningTime="2026-02-16 15:15:34.664673511 +0000 UTC m=+1360.356342550" Feb 16 15:15:35 crc kubenswrapper[4748]: I0216 15:15:35.640986 4748 generic.go:334] "Generic (PLEG): container finished" podID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerID="e67ff66d8217ca39b545f0d490aa22c3c9be52e303db8c81b1732a962a5dded9" exitCode=0 Feb 16 15:15:35 crc kubenswrapper[4748]: I0216 15:15:35.641026 4748 generic.go:334] "Generic (PLEG): container finished" podID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerID="0b54e2c3f99e71510b5e29ec3d639cf43bb11d9d2cd6b803ed760fcf3601ed8f" exitCode=2 Feb 16 15:15:35 crc kubenswrapper[4748]: I0216 15:15:35.641040 4748 generic.go:334] "Generic (PLEG): container finished" podID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerID="391797343a30e5f1af7c3cc18880f7104bcdcdfdcd5946f41a0e1639f9b87f02" exitCode=0 Feb 16 15:15:35 crc kubenswrapper[4748]: I0216 15:15:35.641065 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerDied","Data":"e67ff66d8217ca39b545f0d490aa22c3c9be52e303db8c81b1732a962a5dded9"} Feb 16 15:15:35 crc kubenswrapper[4748]: I0216 15:15:35.641116 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerDied","Data":"0b54e2c3f99e71510b5e29ec3d639cf43bb11d9d2cd6b803ed760fcf3601ed8f"} Feb 16 15:15:35 crc kubenswrapper[4748]: I0216 15:15:35.641131 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerDied","Data":"391797343a30e5f1af7c3cc18880f7104bcdcdfdcd5946f41a0e1639f9b87f02"} Feb 16 15:15:35 crc kubenswrapper[4748]: E0216 15:15:35.996133 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:15:38 crc kubenswrapper[4748]: I0216 15:15:38.163785 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:15:38 crc kubenswrapper[4748]: I0216 15:15:38.164062 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 16 15:15:38 crc kubenswrapper[4748]: I0216 15:15:38.197599 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:15:38 crc kubenswrapper[4748]: I0216 15:15:38.221269 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 16 15:15:38 crc kubenswrapper[4748]: I0216 15:15:38.673092 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:15:38 crc kubenswrapper[4748]: I0216 15:15:38.673140 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 
15:15:39.687955 4748 generic.go:334] "Generic (PLEG): container finished" podID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerID="2868a699845efa013f08c2bbced70e222c9c4fd131c940b9b2ba9c9f0803e3e6" exitCode=0 Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.688140 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerDied","Data":"2868a699845efa013f08c2bbced70e222c9c4fd131c940b9b2ba9c9f0803e3e6"} Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.688745 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed78aed2-31fc-4a37-8113-f5f09a4c06a1","Type":"ContainerDied","Data":"5fcecfc92d4a8727a93a0328bcc70c219d4909bae09016a382dfb23aa44b83c7"} Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.688762 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fcecfc92d4a8727a93a0328bcc70c219d4909bae09016a382dfb23aa44b83c7" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.778561 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.853169 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-run-httpd\") pod \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.853221 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z29xs\" (UniqueName: \"kubernetes.io/projected/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-kube-api-access-z29xs\") pod \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.853297 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-combined-ca-bundle\") pod \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.853404 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-sg-core-conf-yaml\") pod \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.853582 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-scripts\") pod \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.853647 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-config-data\") pod \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.853738 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-log-httpd\") pod \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\" (UID: \"ed78aed2-31fc-4a37-8113-f5f09a4c06a1\") " Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.854457 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed78aed2-31fc-4a37-8113-f5f09a4c06a1" (UID: "ed78aed2-31fc-4a37-8113-f5f09a4c06a1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.854618 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed78aed2-31fc-4a37-8113-f5f09a4c06a1" (UID: "ed78aed2-31fc-4a37-8113-f5f09a4c06a1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.872131 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-kube-api-access-z29xs" (OuterVolumeSpecName: "kube-api-access-z29xs") pod "ed78aed2-31fc-4a37-8113-f5f09a4c06a1" (UID: "ed78aed2-31fc-4a37-8113-f5f09a4c06a1"). InnerVolumeSpecName "kube-api-access-z29xs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.872239 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-scripts" (OuterVolumeSpecName: "scripts") pod "ed78aed2-31fc-4a37-8113-f5f09a4c06a1" (UID: "ed78aed2-31fc-4a37-8113-f5f09a4c06a1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.882141 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed78aed2-31fc-4a37-8113-f5f09a4c06a1" (UID: "ed78aed2-31fc-4a37-8113-f5f09a4c06a1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.949502 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed78aed2-31fc-4a37-8113-f5f09a4c06a1" (UID: "ed78aed2-31fc-4a37-8113-f5f09a4c06a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.951158 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-config-data" (OuterVolumeSpecName: "config-data") pod "ed78aed2-31fc-4a37-8113-f5f09a4c06a1" (UID: "ed78aed2-31fc-4a37-8113-f5f09a4c06a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.955300 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.955329 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.955341 4748 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.955349 4748 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.955358 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z29xs\" (UniqueName: \"kubernetes.io/projected/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-kube-api-access-z29xs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.955369 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.955378 4748 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed78aed2-31fc-4a37-8113-f5f09a4c06a1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:39 crc kubenswrapper[4748]: I0216 15:15:39.991667 4748 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.696829 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724001 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-92xs7"] Feb 16 15:15:40 crc kubenswrapper[4748]: E0216 15:15:40.724409 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-central-agent" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724426 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-central-agent" Feb 16 15:15:40 crc kubenswrapper[4748]: E0216 15:15:40.724442 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-notification-agent" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724451 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-notification-agent" Feb 16 15:15:40 crc kubenswrapper[4748]: E0216 15:15:40.724495 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="proxy-httpd" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724507 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="proxy-httpd" Feb 16 15:15:40 crc kubenswrapper[4748]: E0216 15:15:40.724522 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="sg-core" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724531 4748 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="sg-core" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724763 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="proxy-httpd" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724789 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-central-agent" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724799 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="sg-core" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.724809 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" containerName="ceilometer-notification-agent" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.725489 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.733986 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.734062 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.737694 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-92xs7"] Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.781746 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.806117 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.806253 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.822111 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.831365 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.872109 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsts5\" (UniqueName: \"kubernetes.io/projected/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-kube-api-access-dsts5\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.872287 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.872317 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-scripts\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.872375 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-config-data\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.929931 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.934338 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.939755 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.943968 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.969416 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.973988 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsts5\" (UniqueName: \"kubernetes.io/projected/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-kube-api-access-dsts5\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.974155 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.974199 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-scripts\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.974248 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-config-data\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: 
\"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.979596 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-config-data\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:40 crc kubenswrapper[4748]: I0216 15:15:40.990411 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:40.994305 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-scripts\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:40.999429 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsts5\" (UniqueName: \"kubernetes.io/projected/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-kube-api-access-dsts5\") pod \"nova-cell0-cell-mapping-92xs7\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.029989 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed78aed2-31fc-4a37-8113-f5f09a4c06a1" path="/var/lib/kubelet/pods/ed78aed2-31fc-4a37-8113-f5f09a4c06a1/volumes" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.047597 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.056792 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.058595 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.064144 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.077742 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-config-data\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.078051 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.078088 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-run-httpd\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.078109 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssqck\" (UniqueName: \"kubernetes.io/projected/7fbec113-0eb9-46a2-88e5-561d2057cfd8-kube-api-access-ssqck\") pod \"ceilometer-0\" (UID: 
\"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.078134 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-scripts\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.078149 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-log-httpd\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.078291 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.097951 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.097998 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.147171 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.180369 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.180593 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-config-data\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.180734 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.189253 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv4h6\" (UniqueName: \"kubernetes.io/projected/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-kube-api-access-mv4h6\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.189587 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-config-data\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.189680 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 
15:15:41.189852 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-run-httpd\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.189946 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssqck\" (UniqueName: \"kubernetes.io/projected/7fbec113-0eb9-46a2-88e5-561d2057cfd8-kube-api-access-ssqck\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.190051 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-scripts\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.190139 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-log-httpd\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.196412 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-log-httpd\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.199988 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-run-httpd\") pod \"ceilometer-0\" (UID: 
\"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.210235 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-scripts\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.210335 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.211599 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.214823 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-config-data\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.244782 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7zmfq"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.247512 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.248683 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.251978 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssqck\" (UniqueName: \"kubernetes.io/projected/7fbec113-0eb9-46a2-88e5-561d2057cfd8-kube-api-access-ssqck\") pod \"ceilometer-0\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") " pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.263443 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.292794 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-config-data\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.292837 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.292886 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv4h6\" (UniqueName: \"kubernetes.io/projected/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-kube-api-access-mv4h6\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.295064 4748 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.302531 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.304251 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-config-data\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.316847 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.323002 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.329141 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.338450 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7zmfq"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.358639 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.368124 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv4h6\" (UniqueName: \"kubernetes.io/projected/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-kube-api-access-mv4h6\") pod \"nova-scheduler-0\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.379891 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.385242 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.396736 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.400826 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc91823-fe35-4153-9e45-c8f87d6ec75a-logs\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.401008 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcr5x\" (UniqueName: \"kubernetes.io/projected/c10476db-386a-45fe-8050-daa6daf02664-kube-api-access-dcr5x\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.401050 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r6gq\" (UniqueName: \"kubernetes.io/projected/6cc91823-fe35-4153-9e45-c8f87d6ec75a-kube-api-access-9r6gq\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.401071 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-catalog-content\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.401093 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-utilities\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.401156 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-config-data\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.401480 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.407859 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.437493 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.439762 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.444072 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.449953 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.498397 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-75hvp"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.501438 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.507861 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-75hvp"] Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509271 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-config-data\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509319 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5qxj\" (UniqueName: \"kubernetes.io/projected/26f08590-1f35-4ca4-b50d-5c342cde90a2-kube-api-access-z5qxj\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509366 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509417 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-config-data\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509479 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm5w5\" (UniqueName: \"kubernetes.io/projected/4b889dac-53d6-46c6-b4a5-585cbf3e4495-kube-api-access-tm5w5\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509517 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509556 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b889dac-53d6-46c6-b4a5-585cbf3e4495-logs\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509637 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc91823-fe35-4153-9e45-c8f87d6ec75a-logs\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509684 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.509730 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.511365 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc91823-fe35-4153-9e45-c8f87d6ec75a-logs\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.511365 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcr5x\" (UniqueName: \"kubernetes.io/projected/c10476db-386a-45fe-8050-daa6daf02664-kube-api-access-dcr5x\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.511542 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r6gq\" (UniqueName: \"kubernetes.io/projected/6cc91823-fe35-4153-9e45-c8f87d6ec75a-kube-api-access-9r6gq\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.511589 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-catalog-content\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.511632 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-utilities\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.512356 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-utilities\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.512650 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-catalog-content\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.521475 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-config-data\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.523326 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-combined-ca-bundle\") pod \"nova-metadata-0\" 
(UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.547933 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r6gq\" (UniqueName: \"kubernetes.io/projected/6cc91823-fe35-4153-9e45-c8f87d6ec75a-kube-api-access-9r6gq\") pod \"nova-metadata-0\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.551288 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.563137 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcr5x\" (UniqueName: \"kubernetes.io/projected/c10476db-386a-45fe-8050-daa6daf02664-kube-api-access-dcr5x\") pod \"community-operators-7zmfq\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.597826 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613275 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk42k\" (UniqueName: \"kubernetes.io/projected/be837065-1402-43d8-a26a-b3997a11e226-kube-api-access-pk42k\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613344 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613399 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5qxj\" (UniqueName: \"kubernetes.io/projected/26f08590-1f35-4ca4-b50d-5c342cde90a2-kube-api-access-z5qxj\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613421 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613448 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-config-data\") pod \"nova-api-0\" (UID: 
\"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613469 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613500 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-config\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613522 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm5w5\" (UniqueName: \"kubernetes.io/projected/4b889dac-53d6-46c6-b4a5-585cbf3e4495-kube-api-access-tm5w5\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613538 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613578 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-svc\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " 
pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613596 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b889dac-53d6-46c6-b4a5-585cbf3e4495-logs\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.613656 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.614090 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.643416 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b889dac-53d6-46c6-b4a5-585cbf3e4495-logs\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.651448 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.651565 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.651916 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.655932 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-config-data\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.659205 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm5w5\" (UniqueName: \"kubernetes.io/projected/4b889dac-53d6-46c6-b4a5-585cbf3e4495-kube-api-access-tm5w5\") pod \"nova-api-0\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.659654 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5qxj\" (UniqueName: \"kubernetes.io/projected/26f08590-1f35-4ca4-b50d-5c342cde90a2-kube-api-access-z5qxj\") pod \"nova-cell1-novncproxy-0\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") " pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.724513 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pk42k\" (UniqueName: \"kubernetes.io/projected/be837065-1402-43d8-a26a-b3997a11e226-kube-api-access-pk42k\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: 
\"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.724858 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.725426 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.725647 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.727182 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.727270 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-config\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.727318 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.764790 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.765595 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-config\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.765971 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-svc\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.765979 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.766970 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-svc\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: 
\"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.796968 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.812840 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.837884 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.838304 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:41 crc kubenswrapper[4748]: I0216 15:15:41.844289 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pk42k\" (UniqueName: \"kubernetes.io/projected/be837065-1402-43d8-a26a-b3997a11e226-kube-api-access-pk42k\") pod \"dnsmasq-dns-757b4f8459-75hvp\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") " pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.037117 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-92xs7"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.094627 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.301695 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.320552 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.560107 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dmlnl"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.561863 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.570243 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.570262 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.575010 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dmlnl"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.588679 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7zmfq"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.607026 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-scripts\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.607079 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lzvr4\" (UniqueName: \"kubernetes.io/projected/df36d33b-ccf3-49af-9696-058097245d94-kube-api-access-lzvr4\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.607107 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-config-data\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.607196 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.709139 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-scripts\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.709190 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzvr4\" (UniqueName: \"kubernetes.io/projected/df36d33b-ccf3-49af-9696-058097245d94-kube-api-access-lzvr4\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.709229 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-config-data\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.709384 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.722781 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-config-data\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.725138 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.727152 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-scripts\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.748249 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lzvr4\" (UniqueName: \"kubernetes.io/projected/df36d33b-ccf3-49af-9696-058097245d94-kube-api-access-lzvr4\") pod \"nova-cell1-conductor-db-sync-dmlnl\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.816569 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.824951 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.914602 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.924323 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerStarted","Data":"895d047c74dfeb2e4bb5d3539e85c538c144b046fa7061b719fd89b074a8d8d6"} Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.932888 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac","Type":"ContainerStarted","Data":"2c305bd694a2c7dddcf21c78cfa8791402bbd5934505c194c2c471f0d8770721"} Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.963252 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7zmfq" event={"ID":"c10476db-386a-45fe-8050-daa6daf02664","Type":"ContainerStarted","Data":"c308689d99a25165dff98b16fb2b461e2c76f72379a689664133adc9a58e6eab"} Feb 16 15:15:42 crc kubenswrapper[4748]: I0216 15:15:42.992780 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-92xs7" 
event={"ID":"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5","Type":"ContainerStarted","Data":"239ec728cdf4a5026ec5788ea50f53712da233edcf31cccbae04e8e87c987c97"} Feb 16 15:15:43 crc kubenswrapper[4748]: I0216 15:15:43.100272 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:43 crc kubenswrapper[4748]: I0216 15:15:43.158266 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-75hvp"] Feb 16 15:15:43 crc kubenswrapper[4748]: I0216 15:15:43.585554 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dmlnl"] Feb 16 15:15:43 crc kubenswrapper[4748]: W0216 15:15:43.622884 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf36d33b_ccf3_49af_9696_058097245d94.slice/crio-fa5d7cb6aa214bf1f8ae34682f6c1648c4876c77624cea588b37ce51c106a401 WatchSource:0}: Error finding container fa5d7cb6aa214bf1f8ae34682f6c1648c4876c77624cea588b37ce51c106a401: Status 404 returned error can't find the container with id fa5d7cb6aa214bf1f8ae34682f6c1648c4876c77624cea588b37ce51c106a401 Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.011386 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" event={"ID":"df36d33b-ccf3-49af-9696-058097245d94","Type":"ContainerStarted","Data":"8f6a91d4525671bc185440cfb0767e0764e455ec99d2b31180e37db1f646fa16"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.011802 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" event={"ID":"df36d33b-ccf3-49af-9696-058097245d94","Type":"ContainerStarted","Data":"fa5d7cb6aa214bf1f8ae34682f6c1648c4876c77624cea588b37ce51c106a401"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.028520 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-92xs7" 
event={"ID":"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5","Type":"ContainerStarted","Data":"9aeba28c3a78c82fa72969d5da5e3579a507fd5eae8d1391b624eb6d510f9ed3"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.038190 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" podStartSLOduration=2.038171186 podStartE2EDuration="2.038171186s" podCreationTimestamp="2026-02-16 15:15:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:44.029737148 +0000 UTC m=+1369.721406187" watchObservedRunningTime="2026-02-16 15:15:44.038171186 +0000 UTC m=+1369.729840225" Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.040364 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerStarted","Data":"8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.049665 4748 generic.go:334] "Generic (PLEG): container finished" podID="be837065-1402-43d8-a26a-b3997a11e226" containerID="c7075cd713a5388fd01a246edfda53571d4105aa1b19b23f02d4934bd871765f" exitCode=0 Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.049755 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" event={"ID":"be837065-1402-43d8-a26a-b3997a11e226","Type":"ContainerDied","Data":"c7075cd713a5388fd01a246edfda53571d4105aa1b19b23f02d4934bd871765f"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.049788 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" event={"ID":"be837065-1402-43d8-a26a-b3997a11e226","Type":"ContainerStarted","Data":"3ebc1cbb98255d981d471612d2afd1fbcca0242f77c930de4bdc5efed7e5cca1"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.055008 4748 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cc91823-fe35-4153-9e45-c8f87d6ec75a","Type":"ContainerStarted","Data":"ad3427368ec51134bfc397058ca1d22cc3b852ef5f2a8e13c824b4c012aabec0"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.057368 4748 generic.go:334] "Generic (PLEG): container finished" podID="c10476db-386a-45fe-8050-daa6daf02664" containerID="adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f" exitCode=0 Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.057415 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7zmfq" event={"ID":"c10476db-386a-45fe-8050-daa6daf02664","Type":"ContainerDied","Data":"adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.061388 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b889dac-53d6-46c6-b4a5-585cbf3e4495","Type":"ContainerStarted","Data":"e3f3255824b6163214bf8444939e94b7279e0d869627959b93adde0cb834f33b"} Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.071768 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-92xs7" podStartSLOduration=4.071750571 podStartE2EDuration="4.071750571s" podCreationTimestamp="2026-02-16 15:15:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:44.047092135 +0000 UTC m=+1369.738761174" watchObservedRunningTime="2026-02-16 15:15:44.071750571 +0000 UTC m=+1369.763419610" Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.075792 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 15:15:44.075817 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:15:44 crc kubenswrapper[4748]: I0216 
15:15:44.076836 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26f08590-1f35-4ca4-b50d-5c342cde90a2","Type":"ContainerStarted","Data":"6c5bc258f83bc9a7bf664e017390e8cc3c971470ba54a9ae845fca090eebe2e1"} Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.102854 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerStarted","Data":"c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da"} Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.107882 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" event={"ID":"be837065-1402-43d8-a26a-b3997a11e226","Type":"ContainerStarted","Data":"033e37a7466be3463c27114d3ee521be2206cd1ee8fa34c1ddcd075797cd34fb"} Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.108459 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.151129 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" podStartSLOduration=4.151102709 podStartE2EDuration="4.151102709s" podCreationTimestamp="2026-02-16 15:15:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:45.13855119 +0000 UTC m=+1370.830220239" watchObservedRunningTime="2026-02-16 15:15:45.151102709 +0000 UTC m=+1370.842771758" Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.339089 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.349263 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:45 crc 
kubenswrapper[4748]: I0216 15:15:45.349377 4748 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.353213 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 16 15:15:45 crc kubenswrapper[4748]: I0216 15:15:45.457093 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.150538 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b889dac-53d6-46c6-b4a5-585cbf3e4495","Type":"ContainerStarted","Data":"96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.151128 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b889dac-53d6-46c6-b4a5-585cbf3e4495","Type":"ContainerStarted","Data":"2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.153441 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="26f08590-1f35-4ca4-b50d-5c342cde90a2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7" gracePeriod=30 Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.153474 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26f08590-1f35-4ca4-b50d-5c342cde90a2","Type":"ContainerStarted","Data":"d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.156782 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerStarted","Data":"b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.166998 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac","Type":"ContainerStarted","Data":"93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.177586 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-log" containerID="cri-o://44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e" gracePeriod=30 Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.177828 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cc91823-fe35-4153-9e45-c8f87d6ec75a","Type":"ContainerStarted","Data":"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.177859 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cc91823-fe35-4153-9e45-c8f87d6ec75a","Type":"ContainerStarted","Data":"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.177912 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-metadata" containerID="cri-o://c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63" gracePeriod=30 Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.184573 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.252241211 podStartE2EDuration="9.184552413s" 
podCreationTimestamp="2026-02-16 15:15:40 +0000 UTC" firstStartedPulling="2026-02-16 15:15:43.091748298 +0000 UTC m=+1368.783417337" lastFinishedPulling="2026-02-16 15:15:48.02405949 +0000 UTC m=+1373.715728539" observedRunningTime="2026-02-16 15:15:49.173739577 +0000 UTC m=+1374.865408616" watchObservedRunningTime="2026-02-16 15:15:49.184552413 +0000 UTC m=+1374.876221452" Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.188360 4748 generic.go:334] "Generic (PLEG): container finished" podID="c10476db-386a-45fe-8050-daa6daf02664" containerID="b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040" exitCode=0 Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.188584 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7zmfq" event={"ID":"c10476db-386a-45fe-8050-daa6daf02664","Type":"ContainerDied","Data":"b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040"} Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.209974 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.536878267 podStartE2EDuration="9.209949468s" podCreationTimestamp="2026-02-16 15:15:40 +0000 UTC" firstStartedPulling="2026-02-16 15:15:42.344152901 +0000 UTC m=+1368.035821940" lastFinishedPulling="2026-02-16 15:15:48.017224112 +0000 UTC m=+1373.708893141" observedRunningTime="2026-02-16 15:15:49.202118265 +0000 UTC m=+1374.893787304" watchObservedRunningTime="2026-02-16 15:15:49.209949468 +0000 UTC m=+1374.901618507" Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.223441 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.138953774 podStartE2EDuration="8.22342267s" podCreationTimestamp="2026-02-16 15:15:41 +0000 UTC" firstStartedPulling="2026-02-16 15:15:42.94387572 +0000 UTC m=+1368.635544759" lastFinishedPulling="2026-02-16 15:15:48.028344606 +0000 
UTC m=+1373.720013655" observedRunningTime="2026-02-16 15:15:49.218006636 +0000 UTC m=+1374.909675675" watchObservedRunningTime="2026-02-16 15:15:49.22342267 +0000 UTC m=+1374.915091709" Feb 16 15:15:49 crc kubenswrapper[4748]: I0216 15:15:49.313203 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.253768868 podStartE2EDuration="9.313178287s" podCreationTimestamp="2026-02-16 15:15:40 +0000 UTC" firstStartedPulling="2026-02-16 15:15:42.944052215 +0000 UTC m=+1368.635721254" lastFinishedPulling="2026-02-16 15:15:48.003461634 +0000 UTC m=+1373.695130673" observedRunningTime="2026-02-16 15:15:49.270954039 +0000 UTC m=+1374.962623078" watchObservedRunningTime="2026-02-16 15:15:49.313178287 +0000 UTC m=+1375.004847336" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.197036 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.201810 4748 generic.go:334] "Generic (PLEG): container finished" podID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerID="c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63" exitCode=0 Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.201842 4748 generic.go:334] "Generic (PLEG): container finished" podID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerID="44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e" exitCode=143 Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.201888 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cc91823-fe35-4153-9e45-c8f87d6ec75a","Type":"ContainerDied","Data":"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63"} Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.201918 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"6cc91823-fe35-4153-9e45-c8f87d6ec75a","Type":"ContainerDied","Data":"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e"} Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.201930 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6cc91823-fe35-4153-9e45-c8f87d6ec75a","Type":"ContainerDied","Data":"ad3427368ec51134bfc397058ca1d22cc3b852ef5f2a8e13c824b4c012aabec0"} Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.201944 4748 scope.go:117] "RemoveContainer" containerID="c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.202081 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.212820 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7zmfq" event={"ID":"c10476db-386a-45fe-8050-daa6daf02664","Type":"ContainerStarted","Data":"91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad"} Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.260399 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7zmfq" podStartSLOduration=4.64336697 podStartE2EDuration="10.260378823s" podCreationTimestamp="2026-02-16 15:15:40 +0000 UTC" firstStartedPulling="2026-02-16 15:15:44.059638634 +0000 UTC m=+1369.751307673" lastFinishedPulling="2026-02-16 15:15:49.676650487 +0000 UTC m=+1375.368319526" observedRunningTime="2026-02-16 15:15:50.247906766 +0000 UTC m=+1375.939575805" watchObservedRunningTime="2026-02-16 15:15:50.260378823 +0000 UTC m=+1375.952047862" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.270940 4748 scope.go:117] "RemoveContainer" containerID="44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 
15:15:50.312469 4748 scope.go:117] "RemoveContainer" containerID="c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63" Feb 16 15:15:50 crc kubenswrapper[4748]: E0216 15:15:50.313046 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63\": container with ID starting with c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63 not found: ID does not exist" containerID="c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.313078 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63"} err="failed to get container status \"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63\": rpc error: code = NotFound desc = could not find container \"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63\": container with ID starting with c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63 not found: ID does not exist" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.313102 4748 scope.go:117] "RemoveContainer" containerID="44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e" Feb 16 15:15:50 crc kubenswrapper[4748]: E0216 15:15:50.313483 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e\": container with ID starting with 44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e not found: ID does not exist" containerID="44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.313501 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e"} err="failed to get container status \"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e\": rpc error: code = NotFound desc = could not find container \"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e\": container with ID starting with 44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e not found: ID does not exist" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.313514 4748 scope.go:117] "RemoveContainer" containerID="c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.313885 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63"} err="failed to get container status \"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63\": rpc error: code = NotFound desc = could not find container \"c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63\": container with ID starting with c24e94c7043b2cfbc0ea410eb440f3100eb06f7461e3797e25f57b9c9de4cc63 not found: ID does not exist" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.313903 4748 scope.go:117] "RemoveContainer" containerID="44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.314804 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-config-data\") pod \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.314874 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-combined-ca-bundle\") pod \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.315039 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r6gq\" (UniqueName: \"kubernetes.io/projected/6cc91823-fe35-4153-9e45-c8f87d6ec75a-kube-api-access-9r6gq\") pod \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.315108 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc91823-fe35-4153-9e45-c8f87d6ec75a-logs\") pod \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\" (UID: \"6cc91823-fe35-4153-9e45-c8f87d6ec75a\") " Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.317369 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e"} err="failed to get container status \"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e\": rpc error: code = NotFound desc = could not find container \"44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e\": container with ID starting with 44bf55f772f734207a1c0a213dc39028f753e16610252590dcf86ad0081d687e not found: ID does not exist" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.317743 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6cc91823-fe35-4153-9e45-c8f87d6ec75a-logs" (OuterVolumeSpecName: "logs") pod "6cc91823-fe35-4153-9e45-c8f87d6ec75a" (UID: "6cc91823-fe35-4153-9e45-c8f87d6ec75a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.321349 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc91823-fe35-4153-9e45-c8f87d6ec75a-kube-api-access-9r6gq" (OuterVolumeSpecName: "kube-api-access-9r6gq") pod "6cc91823-fe35-4153-9e45-c8f87d6ec75a" (UID: "6cc91823-fe35-4153-9e45-c8f87d6ec75a"). InnerVolumeSpecName "kube-api-access-9r6gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.365833 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-config-data" (OuterVolumeSpecName: "config-data") pod "6cc91823-fe35-4153-9e45-c8f87d6ec75a" (UID: "6cc91823-fe35-4153-9e45-c8f87d6ec75a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.384812 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6cc91823-fe35-4153-9e45-c8f87d6ec75a" (UID: "6cc91823-fe35-4153-9e45-c8f87d6ec75a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.418039 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r6gq\" (UniqueName: \"kubernetes.io/projected/6cc91823-fe35-4153-9e45-c8f87d6ec75a-kube-api-access-9r6gq\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.418078 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6cc91823-fe35-4153-9e45-c8f87d6ec75a-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.418091 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.418103 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cc91823-fe35-4153-9e45-c8f87d6ec75a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.536354 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.552111 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.567922 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:50 crc kubenswrapper[4748]: E0216 15:15:50.568736 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-metadata" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.568762 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-metadata" Feb 16 15:15:50 crc 
kubenswrapper[4748]: E0216 15:15:50.568872 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-log" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.568885 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-log" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.569136 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-metadata" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.569171 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" containerName="nova-metadata-log" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.570562 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.572652 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.572798 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.582808 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.621412 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.621737 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e18d055-852c-4121-b43b-b397cd16fefd-logs\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.621758 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.621826 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf2lm\" (UniqueName: \"kubernetes.io/projected/1e18d055-852c-4121-b43b-b397cd16fefd-kube-api-access-pf2lm\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.621879 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-config-data\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.724242 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.724291 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/1e18d055-852c-4121-b43b-b397cd16fefd-logs\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.724317 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.724395 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf2lm\" (UniqueName: \"kubernetes.io/projected/1e18d055-852c-4121-b43b-b397cd16fefd-kube-api-access-pf2lm\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.724459 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-config-data\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.724869 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e18d055-852c-4121-b43b-b397cd16fefd-logs\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.728092 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc 
kubenswrapper[4748]: I0216 15:15:50.728376 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.729782 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-config-data\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.744805 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf2lm\" (UniqueName: \"kubernetes.io/projected/1e18d055-852c-4121-b43b-b397cd16fefd-kube-api-access-pf2lm\") pod \"nova-metadata-0\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: I0216 15:15:50.888660 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:50 crc kubenswrapper[4748]: E0216 15:15:50.997897 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.014859 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc91823-fe35-4153-9e45-c8f87d6ec75a" path="/var/lib/kubelet/pods/6cc91823-fe35-4153-9e45-c8f87d6ec75a/volumes" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.234125 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerStarted","Data":"cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944"} Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.236984 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.279837 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.631808142 podStartE2EDuration="11.279812717s" podCreationTimestamp="2026-02-16 15:15:40 +0000 UTC" firstStartedPulling="2026-02-16 15:15:42.352407844 +0000 UTC m=+1368.044076883" lastFinishedPulling="2026-02-16 15:15:50.000412409 +0000 UTC m=+1375.692081458" observedRunningTime="2026-02-16 15:15:51.276082995 +0000 UTC m=+1376.967759874" watchObservedRunningTime="2026-02-16 15:15:51.279812717 +0000 UTC m=+1376.971481756" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.448593 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:51 crc kubenswrapper[4748]: W0216 15:15:51.450761 
4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e18d055_852c_4121_b43b_b397cd16fefd.slice/crio-87c0bc04f233d968f39b70b4057682267d4f9d28609e7c803c0cf3d06e22d2b8 WatchSource:0}: Error finding container 87c0bc04f233d968f39b70b4057682267d4f9d28609e7c803c0cf3d06e22d2b8: Status 404 returned error can't find the container with id 87c0bc04f233d968f39b70b4057682267d4f9d28609e7c803c0cf3d06e22d2b8 Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.551882 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.551944 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.587942 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.599463 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.599494 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.726409 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.814958 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:15:51 crc kubenswrapper[4748]: I0216 15:15:51.815017 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.096933 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.188786 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-v94vm"] Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.189445 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" podUID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerName="dnsmasq-dns" containerID="cri-o://a3abffd4599c49844f84e2b5f01b2024c7775add7e218dae603c91ca42c667d8" gracePeriod=10 Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.295775 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e18d055-852c-4121-b43b-b397cd16fefd","Type":"ContainerStarted","Data":"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a"} Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.295815 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e18d055-852c-4121-b43b-b397cd16fefd","Type":"ContainerStarted","Data":"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d"} Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.295825 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e18d055-852c-4121-b43b-b397cd16fefd","Type":"ContainerStarted","Data":"87c0bc04f233d968f39b70b4057682267d4f9d28609e7c803c0cf3d06e22d2b8"} Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.315878 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.315852499 podStartE2EDuration="2.315852499s" podCreationTimestamp="2026-02-16 15:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:52.310684032 +0000 UTC m=+1378.002353071" watchObservedRunningTime="2026-02-16 
15:15:52.315852499 +0000 UTC m=+1378.007521538" Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.389860 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.677415 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7zmfq" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="registry-server" probeResult="failure" output=< Feb 16 15:15:52 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:15:52 crc kubenswrapper[4748]: > Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.898960 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:15:52 crc kubenswrapper[4748]: I0216 15:15:52.899252 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.306882 4748 generic.go:334] "Generic (PLEG): container finished" podID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerID="a3abffd4599c49844f84e2b5f01b2024c7775add7e218dae603c91ca42c667d8" exitCode=0 Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.306928 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" event={"ID":"ff626f18-cf62-4e6d-8659-89370cb65f7f","Type":"ContainerDied","Data":"a3abffd4599c49844f84e2b5f01b2024c7775add7e218dae603c91ca42c667d8"} Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.306970 4748 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" event={"ID":"ff626f18-cf62-4e6d-8659-89370cb65f7f","Type":"ContainerDied","Data":"c383e3008826f3e3ef6bf5ded1adfb50dd0d2184ee86726b896788e6e69008c6"} Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.306992 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c383e3008826f3e3ef6bf5ded1adfb50dd0d2184ee86726b896788e6e69008c6" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.310542 4748 generic.go:334] "Generic (PLEG): container finished" podID="b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" containerID="9aeba28c3a78c82fa72969d5da5e3579a507fd5eae8d1391b624eb6d510f9ed3" exitCode=0 Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.310661 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-92xs7" event={"ID":"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5","Type":"ContainerDied","Data":"9aeba28c3a78c82fa72969d5da5e3579a507fd5eae8d1391b624eb6d510f9ed3"} Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.407747 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.510367 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8htsc\" (UniqueName: \"kubernetes.io/projected/ff626f18-cf62-4e6d-8659-89370cb65f7f-kube-api-access-8htsc\") pod \"ff626f18-cf62-4e6d-8659-89370cb65f7f\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.510473 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-swift-storage-0\") pod \"ff626f18-cf62-4e6d-8659-89370cb65f7f\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.510600 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-nb\") pod \"ff626f18-cf62-4e6d-8659-89370cb65f7f\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.510703 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-sb\") pod \"ff626f18-cf62-4e6d-8659-89370cb65f7f\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.510761 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-svc\") pod \"ff626f18-cf62-4e6d-8659-89370cb65f7f\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.510805 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-config\") pod \"ff626f18-cf62-4e6d-8659-89370cb65f7f\" (UID: \"ff626f18-cf62-4e6d-8659-89370cb65f7f\") " Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.536110 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff626f18-cf62-4e6d-8659-89370cb65f7f-kube-api-access-8htsc" (OuterVolumeSpecName: "kube-api-access-8htsc") pod "ff626f18-cf62-4e6d-8659-89370cb65f7f" (UID: "ff626f18-cf62-4e6d-8659-89370cb65f7f"). InnerVolumeSpecName "kube-api-access-8htsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.585421 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ff626f18-cf62-4e6d-8659-89370cb65f7f" (UID: "ff626f18-cf62-4e6d-8659-89370cb65f7f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.612117 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ff626f18-cf62-4e6d-8659-89370cb65f7f" (UID: "ff626f18-cf62-4e6d-8659-89370cb65f7f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.613523 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8htsc\" (UniqueName: \"kubernetes.io/projected/ff626f18-cf62-4e6d-8659-89370cb65f7f-kube-api-access-8htsc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.613544 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.613555 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.619123 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-config" (OuterVolumeSpecName: "config") pod "ff626f18-cf62-4e6d-8659-89370cb65f7f" (UID: "ff626f18-cf62-4e6d-8659-89370cb65f7f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.652144 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ff626f18-cf62-4e6d-8659-89370cb65f7f" (UID: "ff626f18-cf62-4e6d-8659-89370cb65f7f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.661361 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ff626f18-cf62-4e6d-8659-89370cb65f7f" (UID: "ff626f18-cf62-4e6d-8659-89370cb65f7f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.715939 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.716000 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:53 crc kubenswrapper[4748]: I0216 15:15:53.716015 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff626f18-cf62-4e6d-8659-89370cb65f7f-config\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.323117 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-v94vm" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.369849 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-v94vm"] Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.381276 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-v94vm"] Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.793221 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.842974 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsts5\" (UniqueName: \"kubernetes.io/projected/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-kube-api-access-dsts5\") pod \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.843075 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-combined-ca-bundle\") pod \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.843144 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-config-data\") pod \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.843769 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-scripts\") pod \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\" (UID: \"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5\") " Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.848148 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-scripts" (OuterVolumeSpecName: "scripts") pod "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" (UID: "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.850421 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-kube-api-access-dsts5" (OuterVolumeSpecName: "kube-api-access-dsts5") pod "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" (UID: "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5"). InnerVolumeSpecName "kube-api-access-dsts5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.889817 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-config-data" (OuterVolumeSpecName: "config-data") pod "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" (UID: "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.895266 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" (UID: "b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.946233 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsts5\" (UniqueName: \"kubernetes.io/projected/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-kube-api-access-dsts5\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.946265 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.946277 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:54 crc kubenswrapper[4748]: I0216 15:15:54.946285 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.021547 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff626f18-cf62-4e6d-8659-89370cb65f7f" path="/var/lib/kubelet/pods/ff626f18-cf62-4e6d-8659-89370cb65f7f/volumes" Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.334135 4748 generic.go:334] "Generic (PLEG): container finished" podID="df36d33b-ccf3-49af-9696-058097245d94" containerID="8f6a91d4525671bc185440cfb0767e0764e455ec99d2b31180e37db1f646fa16" exitCode=0 Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.334222 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" event={"ID":"df36d33b-ccf3-49af-9696-058097245d94","Type":"ContainerDied","Data":"8f6a91d4525671bc185440cfb0767e0764e455ec99d2b31180e37db1f646fa16"} Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 
15:15:55.336453 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-92xs7" event={"ID":"b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5","Type":"ContainerDied","Data":"239ec728cdf4a5026ec5788ea50f53712da233edcf31cccbae04e8e87c987c97"} Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.336481 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-92xs7" Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.336481 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="239ec728cdf4a5026ec5788ea50f53712da233edcf31cccbae04e8e87c987c97" Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.512760 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.513064 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-log" containerID="cri-o://2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.513229 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-api" containerID="cri-o://96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.530022 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.530341 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" containerName="nova-scheduler-scheduler" 
containerID="cri-o://93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.542143 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.542409 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-log" containerID="cri-o://d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.542504 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-metadata" containerID="cri-o://2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a" gracePeriod=30 Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.889569 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:15:55 crc kubenswrapper[4748]: I0216 15:15:55.889900 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.346668 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.348002 4748 generic.go:334] "Generic (PLEG): container finished" podID="1e18d055-852c-4121-b43b-b397cd16fefd" containerID="2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a" exitCode=0 Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.348023 4748 generic.go:334] "Generic (PLEG): container finished" podID="1e18d055-852c-4121-b43b-b397cd16fefd" containerID="d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d" exitCode=143 Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.348062 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e18d055-852c-4121-b43b-b397cd16fefd","Type":"ContainerDied","Data":"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a"} Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.348084 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e18d055-852c-4121-b43b-b397cd16fefd","Type":"ContainerDied","Data":"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d"} Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.348093 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1e18d055-852c-4121-b43b-b397cd16fefd","Type":"ContainerDied","Data":"87c0bc04f233d968f39b70b4057682267d4f9d28609e7c803c0cf3d06e22d2b8"} Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.348108 4748 scope.go:117] "RemoveContainer" containerID="2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.351442 4748 generic.go:334] "Generic (PLEG): container finished" podID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerID="2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc" exitCode=143 Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.351734 4748 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b889dac-53d6-46c6-b4a5-585cbf3e4495","Type":"ContainerDied","Data":"2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc"} Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.383855 4748 scope.go:117] "RemoveContainer" containerID="d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.415610 4748 scope.go:117] "RemoveContainer" containerID="2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a" Feb 16 15:15:56 crc kubenswrapper[4748]: E0216 15:15:56.416216 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a\": container with ID starting with 2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a not found: ID does not exist" containerID="2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.416268 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a"} err="failed to get container status \"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a\": rpc error: code = NotFound desc = could not find container \"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a\": container with ID starting with 2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a not found: ID does not exist" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.416296 4748 scope.go:117] "RemoveContainer" containerID="d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d" Feb 16 15:15:56 crc kubenswrapper[4748]: E0216 15:15:56.416635 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d\": container with ID starting with d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d not found: ID does not exist" containerID="d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.416657 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d"} err="failed to get container status \"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d\": rpc error: code = NotFound desc = could not find container \"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d\": container with ID starting with d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d not found: ID does not exist" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.416680 4748 scope.go:117] "RemoveContainer" containerID="2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.417118 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a"} err="failed to get container status \"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a\": rpc error: code = NotFound desc = could not find container \"2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a\": container with ID starting with 2e99a00c60f239f6f87b4f601bed4d9ba862e1a09b13faa066b414203251ca9a not found: ID does not exist" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.417140 4748 scope.go:117] "RemoveContainer" containerID="d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.417366 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d"} err="failed to get container status \"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d\": rpc error: code = NotFound desc = could not find container \"d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d\": container with ID starting with d5c83f0e28cac14da5e2194a87f1af97cd5ce8fbdde7f3932f2ccd24d09ecf2d not found: ID does not exist" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.484618 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-nova-metadata-tls-certs\") pod \"1e18d055-852c-4121-b43b-b397cd16fefd\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.484802 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e18d055-852c-4121-b43b-b397cd16fefd-logs\") pod \"1e18d055-852c-4121-b43b-b397cd16fefd\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.484925 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-config-data\") pod \"1e18d055-852c-4121-b43b-b397cd16fefd\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.484947 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf2lm\" (UniqueName: \"kubernetes.io/projected/1e18d055-852c-4121-b43b-b397cd16fefd-kube-api-access-pf2lm\") pod \"1e18d055-852c-4121-b43b-b397cd16fefd\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.485136 4748 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/1e18d055-852c-4121-b43b-b397cd16fefd-logs" (OuterVolumeSpecName: "logs") pod "1e18d055-852c-4121-b43b-b397cd16fefd" (UID: "1e18d055-852c-4121-b43b-b397cd16fefd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.485278 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-combined-ca-bundle\") pod \"1e18d055-852c-4121-b43b-b397cd16fefd\" (UID: \"1e18d055-852c-4121-b43b-b397cd16fefd\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.486140 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e18d055-852c-4121-b43b-b397cd16fefd-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.501893 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e18d055-852c-4121-b43b-b397cd16fefd-kube-api-access-pf2lm" (OuterVolumeSpecName: "kube-api-access-pf2lm") pod "1e18d055-852c-4121-b43b-b397cd16fefd" (UID: "1e18d055-852c-4121-b43b-b397cd16fefd"). InnerVolumeSpecName "kube-api-access-pf2lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.521255 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-config-data" (OuterVolumeSpecName: "config-data") pod "1e18d055-852c-4121-b43b-b397cd16fefd" (UID: "1e18d055-852c-4121-b43b-b397cd16fefd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.539277 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e18d055-852c-4121-b43b-b397cd16fefd" (UID: "1e18d055-852c-4121-b43b-b397cd16fefd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.555942 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1e18d055-852c-4121-b43b-b397cd16fefd" (UID: "1e18d055-852c-4121-b43b-b397cd16fefd"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: E0216 15:15:56.560427 4748 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c is running failed: container process not found" containerID="93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:15:56 crc kubenswrapper[4748]: E0216 15:15:56.561280 4748 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c is running failed: container process not found" containerID="93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:15:56 crc kubenswrapper[4748]: E0216 15:15:56.561538 4748 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c is running failed: container process not found" containerID="93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:15:56 crc kubenswrapper[4748]: E0216 15:15:56.561569 4748 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" containerName="nova-scheduler-scheduler" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.588430 4748 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.588467 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.588477 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf2lm\" (UniqueName: \"kubernetes.io/projected/1e18d055-852c-4121-b43b-b397cd16fefd-kube-api-access-pf2lm\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.588485 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e18d055-852c-4121-b43b-b397cd16fefd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.867465 4748 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.893961 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-config-data\") pod \"df36d33b-ccf3-49af-9696-058097245d94\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.894123 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzvr4\" (UniqueName: \"kubernetes.io/projected/df36d33b-ccf3-49af-9696-058097245d94-kube-api-access-lzvr4\") pod \"df36d33b-ccf3-49af-9696-058097245d94\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.894316 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-scripts\") pod \"df36d33b-ccf3-49af-9696-058097245d94\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.894350 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-combined-ca-bundle\") pod \"df36d33b-ccf3-49af-9696-058097245d94\" (UID: \"df36d33b-ccf3-49af-9696-058097245d94\") " Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.904844 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df36d33b-ccf3-49af-9696-058097245d94-kube-api-access-lzvr4" (OuterVolumeSpecName: "kube-api-access-lzvr4") pod "df36d33b-ccf3-49af-9696-058097245d94" (UID: "df36d33b-ccf3-49af-9696-058097245d94"). InnerVolumeSpecName "kube-api-access-lzvr4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.905611 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-scripts" (OuterVolumeSpecName: "scripts") pod "df36d33b-ccf3-49af-9696-058097245d94" (UID: "df36d33b-ccf3-49af-9696-058097245d94"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.941788 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-config-data" (OuterVolumeSpecName: "config-data") pod "df36d33b-ccf3-49af-9696-058097245d94" (UID: "df36d33b-ccf3-49af-9696-058097245d94"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.956206 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df36d33b-ccf3-49af-9696-058097245d94" (UID: "df36d33b-ccf3-49af-9696-058097245d94"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.997678 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.997753 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.997770 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df36d33b-ccf3-49af-9696-058097245d94-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:56 crc kubenswrapper[4748]: I0216 15:15:56.997783 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzvr4\" (UniqueName: \"kubernetes.io/projected/df36d33b-ccf3-49af-9696-058097245d94-kube-api-access-lzvr4\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.020470 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.098564 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-config-data\") pod \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.098637 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv4h6\" (UniqueName: \"kubernetes.io/projected/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-kube-api-access-mv4h6\") pod \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.098812 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-combined-ca-bundle\") pod \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\" (UID: \"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac\") " Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.104079 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-kube-api-access-mv4h6" (OuterVolumeSpecName: "kube-api-access-mv4h6") pod "4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" (UID: "4c5b4e43-3f09-4abc-9efb-9edaf9be0fac"). InnerVolumeSpecName "kube-api-access-mv4h6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.133174 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" (UID: "4c5b4e43-3f09-4abc-9efb-9edaf9be0fac"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.136084 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-config-data" (OuterVolumeSpecName: "config-data") pod "4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" (UID: "4c5b4e43-3f09-4abc-9efb-9edaf9be0fac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.203123 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.203409 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv4h6\" (UniqueName: \"kubernetes.io/projected/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-kube-api-access-mv4h6\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.203419 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.365315 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.368138 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.368112 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-dmlnl" event={"ID":"df36d33b-ccf3-49af-9696-058097245d94","Type":"ContainerDied","Data":"fa5d7cb6aa214bf1f8ae34682f6c1648c4876c77624cea588b37ce51c106a401"} Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.368216 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa5d7cb6aa214bf1f8ae34682f6c1648c4876c77624cea588b37ce51c106a401" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.370307 4748 generic.go:334] "Generic (PLEG): container finished" podID="4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" containerID="93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" exitCode=0 Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.370342 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac","Type":"ContainerDied","Data":"93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c"} Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.370368 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4c5b4e43-3f09-4abc-9efb-9edaf9be0fac","Type":"ContainerDied","Data":"2c305bd694a2c7dddcf21c78cfa8791402bbd5934505c194c2c471f0d8770721"} Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.370368 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.370394 4748 scope.go:117] "RemoveContainer" containerID="93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.405984 4748 scope.go:117] "RemoveContainer" containerID="93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" Feb 16 15:15:57 crc kubenswrapper[4748]: E0216 15:15:57.406495 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c\": container with ID starting with 93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c not found: ID does not exist" containerID="93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.406537 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c"} err="failed to get container status \"93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c\": rpc error: code = NotFound desc = could not find container \"93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c\": container with ID starting with 93404ec47e054902843166a8469b680111a18546051fe3b3753632076284117c not found: ID does not exist" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.413599 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.426185 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.438463 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: E0216 15:15:57.439073 4748 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" containerName="nova-scheduler-scheduler" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439099 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" containerName="nova-scheduler-scheduler" Feb 16 15:15:57 crc kubenswrapper[4748]: E0216 15:15:57.439129 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerName="init" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439141 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerName="init" Feb 16 15:15:57 crc kubenswrapper[4748]: E0216 15:15:57.439159 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-log" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439168 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-log" Feb 16 15:15:57 crc kubenswrapper[4748]: E0216 15:15:57.439180 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-metadata" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439188 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-metadata" Feb 16 15:15:57 crc kubenswrapper[4748]: E0216 15:15:57.439198 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df36d33b-ccf3-49af-9696-058097245d94" containerName="nova-cell1-conductor-db-sync" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439207 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="df36d33b-ccf3-49af-9696-058097245d94" containerName="nova-cell1-conductor-db-sync" Feb 16 15:15:57 crc kubenswrapper[4748]: 
E0216 15:15:57.439234 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" containerName="nova-manage" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439242 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" containerName="nova-manage" Feb 16 15:15:57 crc kubenswrapper[4748]: E0216 15:15:57.439270 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerName="dnsmasq-dns" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439279 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerName="dnsmasq-dns" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439542 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-metadata" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439557 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff626f18-cf62-4e6d-8659-89370cb65f7f" containerName="dnsmasq-dns" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439572 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="df36d33b-ccf3-49af-9696-058097245d94" containerName="nova-cell1-conductor-db-sync" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439592 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" containerName="nova-scheduler-scheduler" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439606 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" containerName="nova-metadata-log" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.439616 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" containerName="nova-manage" Feb 16 15:15:57 crc 
kubenswrapper[4748]: I0216 15:15:57.441077 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.449620 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.450937 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.461418 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.461625 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.479863 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.498423 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.500126 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.505192 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.508013 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4793ab6a-d06f-4141-be39-25d49d9cd99d-logs\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.508064 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnjgr\" (UniqueName: \"kubernetes.io/projected/4793ab6a-d06f-4141-be39-25d49d9cd99d-kube-api-access-xnjgr\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.508110 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.508208 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-config-data\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.508229 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.508726 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.510114 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.513441 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.528417 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.545772 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.609762 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-config-data\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.609806 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.609872 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-config-data\") pod 
\"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.609899 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.609926 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4793ab6a-d06f-4141-be39-25d49d9cd99d-logs\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.609953 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnjgr\" (UniqueName: \"kubernetes.io/projected/4793ab6a-d06f-4141-be39-25d49d9cd99d-kube-api-access-xnjgr\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.609976 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.610009 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 
15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.610036 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qzmj\" (UniqueName: \"kubernetes.io/projected/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-kube-api-access-2qzmj\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.610083 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.610109 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qxh6\" (UniqueName: \"kubernetes.io/projected/6f0209e1-f503-4d61-8205-d0d56a2f754e-kube-api-access-7qxh6\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.610435 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4793ab6a-d06f-4141-be39-25d49d9cd99d-logs\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.614634 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.614634 4748 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-config-data\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.615214 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.624669 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnjgr\" (UniqueName: \"kubernetes.io/projected/4793ab6a-d06f-4141-be39-25d49d9cd99d-kube-api-access-xnjgr\") pod \"nova-metadata-0\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") " pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.712028 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.712108 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qzmj\" (UniqueName: \"kubernetes.io/projected/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-kube-api-access-2qzmj\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.712158 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-config-data\") 
pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.712186 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qxh6\" (UniqueName: \"kubernetes.io/projected/6f0209e1-f503-4d61-8205-d0d56a2f754e-kube-api-access-7qxh6\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.712257 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-config-data\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.712281 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.716268 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.717264 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 
15:15:57.718146 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-config-data\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.719234 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.737100 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qzmj\" (UniqueName: \"kubernetes.io/projected/5a6aae8e-3f43-4da4-99a0-6342ae62e9c1-kube-api-access-2qzmj\") pod \"nova-cell1-conductor-0\" (UID: \"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1\") " pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.744356 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qxh6\" (UniqueName: \"kubernetes.io/projected/6f0209e1-f503-4d61-8205-d0d56a2f754e-kube-api-access-7qxh6\") pod \"nova-scheduler-0\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " pod="openstack/nova-scheduler-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.778545 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.825228 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:57 crc kubenswrapper[4748]: I0216 15:15:57.841577 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:15:58 crc kubenswrapper[4748]: I0216 15:15:58.299404 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:15:58 crc kubenswrapper[4748]: I0216 15:15:58.386976 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4793ab6a-d06f-4141-be39-25d49d9cd99d","Type":"ContainerStarted","Data":"4f036ba3ae1845d89449f2e0359801b591aaa2c09be3a62339a9fd922ce19a49"} Feb 16 15:15:58 crc kubenswrapper[4748]: I0216 15:15:58.418119 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 16 15:15:58 crc kubenswrapper[4748]: W0216 15:15:58.428892 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a6aae8e_3f43_4da4_99a0_6342ae62e9c1.slice/crio-a8bb04ac7ecd3696be3de64de2903ef011d17e7ed0c7e28fabe174d755329ebe WatchSource:0}: Error finding container a8bb04ac7ecd3696be3de64de2903ef011d17e7ed0c7e28fabe174d755329ebe: Status 404 returned error can't find the container with id a8bb04ac7ecd3696be3de64de2903ef011d17e7ed0c7e28fabe174d755329ebe Feb 16 15:15:58 crc kubenswrapper[4748]: I0216 15:15:58.572837 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:15:58 crc kubenswrapper[4748]: W0216 15:15:58.676694 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f0209e1_f503_4d61_8205_d0d56a2f754e.slice/crio-a0a81e1960c6d08dffc47db40c58d266517020109152446f20884a9ce504ffc0 WatchSource:0}: Error finding container a0a81e1960c6d08dffc47db40c58d266517020109152446f20884a9ce504ffc0: Status 404 returned error can't find the container with id a0a81e1960c6d08dffc47db40c58d266517020109152446f20884a9ce504ffc0 Feb 16 15:15:58 crc kubenswrapper[4748]: E0216 15:15:58.973795 4748 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b889dac_53d6_46c6_b4a5_585cbf3e4495.slice/crio-96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.016317 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e18d055-852c-4121-b43b-b397cd16fefd" path="/var/lib/kubelet/pods/1e18d055-852c-4121-b43b-b397cd16fefd/volumes" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.017557 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5b4e43-3f09-4abc-9efb-9edaf9be0fac" path="/var/lib/kubelet/pods/4c5b4e43-3f09-4abc-9efb-9edaf9be0fac/volumes" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.135072 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.247990 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b889dac-53d6-46c6-b4a5-585cbf3e4495-logs\") pod \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.248085 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-config-data\") pod \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.248120 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tm5w5\" (UniqueName: \"kubernetes.io/projected/4b889dac-53d6-46c6-b4a5-585cbf3e4495-kube-api-access-tm5w5\") pod 
\"4b889dac-53d6-46c6-b4a5-585cbf3e4495\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.248217 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-combined-ca-bundle\") pod \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\" (UID: \"4b889dac-53d6-46c6-b4a5-585cbf3e4495\") " Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.248488 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b889dac-53d6-46c6-b4a5-585cbf3e4495-logs" (OuterVolumeSpecName: "logs") pod "4b889dac-53d6-46c6-b4a5-585cbf3e4495" (UID: "4b889dac-53d6-46c6-b4a5-585cbf3e4495"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.248832 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b889dac-53d6-46c6-b4a5-585cbf3e4495-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.257949 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b889dac-53d6-46c6-b4a5-585cbf3e4495-kube-api-access-tm5w5" (OuterVolumeSpecName: "kube-api-access-tm5w5") pod "4b889dac-53d6-46c6-b4a5-585cbf3e4495" (UID: "4b889dac-53d6-46c6-b4a5-585cbf3e4495"). InnerVolumeSpecName "kube-api-access-tm5w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.282633 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b889dac-53d6-46c6-b4a5-585cbf3e4495" (UID: "4b889dac-53d6-46c6-b4a5-585cbf3e4495"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.284895 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-config-data" (OuterVolumeSpecName: "config-data") pod "4b889dac-53d6-46c6-b4a5-585cbf3e4495" (UID: "4b889dac-53d6-46c6-b4a5-585cbf3e4495"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.351009 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.351053 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tm5w5\" (UniqueName: \"kubernetes.io/projected/4b889dac-53d6-46c6-b4a5-585cbf3e4495-kube-api-access-tm5w5\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.351067 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b889dac-53d6-46c6-b4a5-585cbf3e4495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.398921 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4793ab6a-d06f-4141-be39-25d49d9cd99d","Type":"ContainerStarted","Data":"abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.399776 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4793ab6a-d06f-4141-be39-25d49d9cd99d","Type":"ContainerStarted","Data":"b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.401973 4748 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1","Type":"ContainerStarted","Data":"d4b0776c38549d85f8073813149bb79adccee6d9b935571e12bbfe0e9d081d7c"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.402069 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5a6aae8e-3f43-4da4-99a0-6342ae62e9c1","Type":"ContainerStarted","Data":"a8bb04ac7ecd3696be3de64de2903ef011d17e7ed0c7e28fabe174d755329ebe"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.402595 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.404738 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f0209e1-f503-4d61-8205-d0d56a2f754e","Type":"ContainerStarted","Data":"ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.404855 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f0209e1-f503-4d61-8205-d0d56a2f754e","Type":"ContainerStarted","Data":"a0a81e1960c6d08dffc47db40c58d266517020109152446f20884a9ce504ffc0"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.407359 4748 generic.go:334] "Generic (PLEG): container finished" podID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerID="96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef" exitCode=0 Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.407499 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4b889dac-53d6-46c6-b4a5-585cbf3e4495","Type":"ContainerDied","Data":"96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.407599 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4b889dac-53d6-46c6-b4a5-585cbf3e4495","Type":"ContainerDied","Data":"e3f3255824b6163214bf8444939e94b7279e0d869627959b93adde0cb834f33b"} Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.407680 4748 scope.go:117] "RemoveContainer" containerID="96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.407885 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.439167 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.43915087 podStartE2EDuration="2.43915087s" podCreationTimestamp="2026-02-16 15:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:59.416425341 +0000 UTC m=+1385.108094380" watchObservedRunningTime="2026-02-16 15:15:59.43915087 +0000 UTC m=+1385.130819909" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.444063 4748 scope.go:117] "RemoveContainer" containerID="2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.445645 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.44563313 podStartE2EDuration="2.44563313s" podCreationTimestamp="2026-02-16 15:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:59.438153666 +0000 UTC m=+1385.129822705" watchObservedRunningTime="2026-02-16 15:15:59.44563313 +0000 UTC m=+1385.137302169" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.470558 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.470524232 
podStartE2EDuration="2.470524232s" podCreationTimestamp="2026-02-16 15:15:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:15:59.456461676 +0000 UTC m=+1385.148130715" watchObservedRunningTime="2026-02-16 15:15:59.470524232 +0000 UTC m=+1385.162193271" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.481902 4748 scope.go:117] "RemoveContainer" containerID="96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef" Feb 16 15:15:59 crc kubenswrapper[4748]: E0216 15:15:59.485649 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef\": container with ID starting with 96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef not found: ID does not exist" containerID="96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.485702 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef"} err="failed to get container status \"96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef\": rpc error: code = NotFound desc = could not find container \"96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef\": container with ID starting with 96e8aa4f69c8fd9fd80c29cc3393025c56f416d193c08b3b4c5c028ee534d1ef not found: ID does not exist" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.485745 4748 scope.go:117] "RemoveContainer" containerID="2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc" Feb 16 15:15:59 crc kubenswrapper[4748]: E0216 15:15:59.486763 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc\": container with ID starting with 2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc not found: ID does not exist" containerID="2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.486820 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc"} err="failed to get container status \"2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc\": rpc error: code = NotFound desc = could not find container \"2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc\": container with ID starting with 2ce64af0fe3aa2c3314d38bbd73e29f374ac61f63599f4bcbdc81e751cf6aefc not found: ID does not exist" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.491994 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.501597 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.514957 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:59 crc kubenswrapper[4748]: E0216 15:15:59.515419 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-api" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.515436 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-api" Feb 16 15:15:59 crc kubenswrapper[4748]: E0216 15:15:59.515446 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-log" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.515453 4748 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-log" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.515639 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-api" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.515663 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" containerName="nova-api-log" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.516805 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.521867 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.549307 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.563317 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.563411 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7grx\" (UniqueName: \"kubernetes.io/projected/04ec54af-7987-472f-82ea-f761231ca3f6-kube-api-access-s7grx\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.563470 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04ec54af-7987-472f-82ea-f761231ca3f6-logs\") pod 
\"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.563505 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-config-data\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.666191 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.666283 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7grx\" (UniqueName: \"kubernetes.io/projected/04ec54af-7987-472f-82ea-f761231ca3f6-kube-api-access-s7grx\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.666338 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04ec54af-7987-472f-82ea-f761231ca3f6-logs\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.666377 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-config-data\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.667592 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/04ec54af-7987-472f-82ea-f761231ca3f6-logs\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.670653 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.671495 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-config-data\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.682702 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7grx\" (UniqueName: \"kubernetes.io/projected/04ec54af-7987-472f-82ea-f761231ca3f6-kube-api-access-s7grx\") pod \"nova-api-0\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " pod="openstack/nova-api-0" Feb 16 15:15:59 crc kubenswrapper[4748]: I0216 15:15:59.834146 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:16:00 crc kubenswrapper[4748]: I0216 15:16:00.357034 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:00 crc kubenswrapper[4748]: W0216 15:16:00.362049 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04ec54af_7987_472f_82ea_f761231ca3f6.slice/crio-392406f6c35882ee8960c43ec99e4121e7526a91cd7a495416fcdb1ad4898e06 WatchSource:0}: Error finding container 392406f6c35882ee8960c43ec99e4121e7526a91cd7a495416fcdb1ad4898e06: Status 404 returned error can't find the container with id 392406f6c35882ee8960c43ec99e4121e7526a91cd7a495416fcdb1ad4898e06 Feb 16 15:16:00 crc kubenswrapper[4748]: I0216 15:16:00.434139 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04ec54af-7987-472f-82ea-f761231ca3f6","Type":"ContainerStarted","Data":"392406f6c35882ee8960c43ec99e4121e7526a91cd7a495416fcdb1ad4898e06"} Feb 16 15:16:01 crc kubenswrapper[4748]: I0216 15:16:01.005167 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b889dac-53d6-46c6-b4a5-585cbf3e4495" path="/var/lib/kubelet/pods/4b889dac-53d6-46c6-b4a5-585cbf3e4495/volumes" Feb 16 15:16:01 crc kubenswrapper[4748]: I0216 15:16:01.457257 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04ec54af-7987-472f-82ea-f761231ca3f6","Type":"ContainerStarted","Data":"9ab190eca756fb1134284e8497b0dd50afcb6ba22e914ee74352d9097e0fb4b7"} Feb 16 15:16:01 crc kubenswrapper[4748]: I0216 15:16:01.457299 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04ec54af-7987-472f-82ea-f761231ca3f6","Type":"ContainerStarted","Data":"4358ec7c551b9fe017be57357c84ab4d57be6cb933315ff3fa74e07536c46b8d"} Feb 16 15:16:01 crc kubenswrapper[4748]: I0216 15:16:01.483848 4748 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.483823759 podStartE2EDuration="2.483823759s" podCreationTimestamp="2026-02-16 15:15:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:01.478616901 +0000 UTC m=+1387.170285950" watchObservedRunningTime="2026-02-16 15:16:01.483823759 +0000 UTC m=+1387.175492818" Feb 16 15:16:01 crc kubenswrapper[4748]: I0216 15:16:01.664916 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:16:01 crc kubenswrapper[4748]: I0216 15:16:01.728773 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:16:01 crc kubenswrapper[4748]: I0216 15:16:01.914519 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7zmfq"] Feb 16 15:16:02 crc kubenswrapper[4748]: I0216 15:16:02.779777 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:16:02 crc kubenswrapper[4748]: I0216 15:16:02.779841 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:16:02 crc kubenswrapper[4748]: I0216 15:16:02.843196 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:16:03 crc kubenswrapper[4748]: I0216 15:16:03.479456 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7zmfq" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="registry-server" containerID="cri-o://91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad" gracePeriod=2 Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.016839 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.167421 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-catalog-content\") pod \"c10476db-386a-45fe-8050-daa6daf02664\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.167577 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcr5x\" (UniqueName: \"kubernetes.io/projected/c10476db-386a-45fe-8050-daa6daf02664-kube-api-access-dcr5x\") pod \"c10476db-386a-45fe-8050-daa6daf02664\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.167805 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-utilities\") pod \"c10476db-386a-45fe-8050-daa6daf02664\" (UID: \"c10476db-386a-45fe-8050-daa6daf02664\") " Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.168735 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-utilities" (OuterVolumeSpecName: "utilities") pod "c10476db-386a-45fe-8050-daa6daf02664" (UID: "c10476db-386a-45fe-8050-daa6daf02664"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.172817 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10476db-386a-45fe-8050-daa6daf02664-kube-api-access-dcr5x" (OuterVolumeSpecName: "kube-api-access-dcr5x") pod "c10476db-386a-45fe-8050-daa6daf02664" (UID: "c10476db-386a-45fe-8050-daa6daf02664"). InnerVolumeSpecName "kube-api-access-dcr5x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.214540 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c10476db-386a-45fe-8050-daa6daf02664" (UID: "c10476db-386a-45fe-8050-daa6daf02664"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.271277 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.271324 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcr5x\" (UniqueName: \"kubernetes.io/projected/c10476db-386a-45fe-8050-daa6daf02664-kube-api-access-dcr5x\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.271404 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c10476db-386a-45fe-8050-daa6daf02664-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.491964 4748 generic.go:334] "Generic (PLEG): container finished" podID="c10476db-386a-45fe-8050-daa6daf02664" containerID="91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad" exitCode=0 Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.492073 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7zmfq" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.492066 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7zmfq" event={"ID":"c10476db-386a-45fe-8050-daa6daf02664","Type":"ContainerDied","Data":"91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad"} Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.492465 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7zmfq" event={"ID":"c10476db-386a-45fe-8050-daa6daf02664","Type":"ContainerDied","Data":"c308689d99a25165dff98b16fb2b461e2c76f72379a689664133adc9a58e6eab"} Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.492488 4748 scope.go:117] "RemoveContainer" containerID="91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.536407 4748 scope.go:117] "RemoveContainer" containerID="b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.538510 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7zmfq"] Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.549861 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7zmfq"] Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.559901 4748 scope.go:117] "RemoveContainer" containerID="adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.613274 4748 scope.go:117] "RemoveContainer" containerID="91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad" Feb 16 15:16:04 crc kubenswrapper[4748]: E0216 15:16:04.613807 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad\": container with ID starting with 91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad not found: ID does not exist" containerID="91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.613856 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad"} err="failed to get container status \"91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad\": rpc error: code = NotFound desc = could not find container \"91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad\": container with ID starting with 91ea7416340257546863acb0961dd5b2ba05d6c55cb2ce8e8fc1146d7f723dad not found: ID does not exist" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.613882 4748 scope.go:117] "RemoveContainer" containerID="b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040" Feb 16 15:16:04 crc kubenswrapper[4748]: E0216 15:16:04.614236 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040\": container with ID starting with b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040 not found: ID does not exist" containerID="b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.614299 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040"} err="failed to get container status \"b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040\": rpc error: code = NotFound desc = could not find container \"b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040\": container with ID 
starting with b6619383c4df6c6aa2ca44ef7ba2c48a2fd9b0e51d839baf3e87aae60dcfb040 not found: ID does not exist" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.614329 4748 scope.go:117] "RemoveContainer" containerID="adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f" Feb 16 15:16:04 crc kubenswrapper[4748]: E0216 15:16:04.614686 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f\": container with ID starting with adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f not found: ID does not exist" containerID="adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.614759 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f"} err="failed to get container status \"adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f\": rpc error: code = NotFound desc = could not find container \"adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f\": container with ID starting with adfaba26a0dab8a573d6615d5cd7f8eeab8ef79f211b581aec83642bc2faac8f not found: ID does not exist" Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.747364 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:16:04 crc kubenswrapper[4748]: I0216 15:16:04.747589 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.012615 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10476db-386a-45fe-8050-daa6daf02664" path="/var/lib/kubelet/pods/c10476db-386a-45fe-8050-daa6daf02664/volumes" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.946863 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5l4k9"] Feb 16 15:16:05 crc kubenswrapper[4748]: E0216 15:16:05.947351 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="extract-utilities" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.947364 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="extract-utilities" Feb 16 15:16:05 crc kubenswrapper[4748]: E0216 15:16:05.947376 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="registry-server" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.947383 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="registry-server" Feb 16 15:16:05 crc kubenswrapper[4748]: E0216 15:16:05.947395 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="extract-content" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.947401 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="extract-content" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.947629 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10476db-386a-45fe-8050-daa6daf02664" containerName="registry-server" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.949151 4748 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:05 crc kubenswrapper[4748]: I0216 15:16:05.962641 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5l4k9"] Feb 16 15:16:06 crc kubenswrapper[4748]: E0216 15:16:06.007042 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.015215 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n5gj\" (UniqueName: \"kubernetes.io/projected/052d4b89-506b-4911-9f30-30ffb4b0e7b3-kube-api-access-7n5gj\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.015332 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-catalog-content\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.015365 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-utilities\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.119897 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-utilities\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.120038 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n5gj\" (UniqueName: \"kubernetes.io/projected/052d4b89-506b-4911-9f30-30ffb4b0e7b3-kube-api-access-7n5gj\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.120118 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-catalog-content\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.120574 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-catalog-content\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.121896 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-utilities\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.159748 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7n5gj\" (UniqueName: \"kubernetes.io/projected/052d4b89-506b-4911-9f30-30ffb4b0e7b3-kube-api-access-7n5gj\") pod \"redhat-operators-5l4k9\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.288528 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:06 crc kubenswrapper[4748]: I0216 15:16:06.838093 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5l4k9"] Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.524822 4748 generic.go:334] "Generic (PLEG): container finished" podID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerID="9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647" exitCode=0 Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.524916 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l4k9" event={"ID":"052d4b89-506b-4911-9f30-30ffb4b0e7b3","Type":"ContainerDied","Data":"9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647"} Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.525069 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l4k9" event={"ID":"052d4b89-506b-4911-9f30-30ffb4b0e7b3","Type":"ContainerStarted","Data":"49090529106ed0e3331424914f1790945f2fed75da4587141a6fdfb9fb146bed"} Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.779538 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.780901 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.842831 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-scheduler-0" Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.880574 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 16 15:16:07 crc kubenswrapper[4748]: I0216 15:16:07.880999 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:16:08 crc kubenswrapper[4748]: I0216 15:16:08.538520 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l4k9" event={"ID":"052d4b89-506b-4911-9f30-30ffb4b0e7b3","Type":"ContainerStarted","Data":"6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c"} Feb 16 15:16:08 crc kubenswrapper[4748]: I0216 15:16:08.586979 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:16:08 crc kubenswrapper[4748]: I0216 15:16:08.796954 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:08 crc kubenswrapper[4748]: I0216 15:16:08.796985 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:09 crc kubenswrapper[4748]: I0216 15:16:09.837076 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:16:09 crc kubenswrapper[4748]: I0216 15:16:09.837388 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:16:10 crc kubenswrapper[4748]: I0216 
15:16:10.597979 4748 generic.go:334] "Generic (PLEG): container finished" podID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerID="6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c" exitCode=0 Feb 16 15:16:10 crc kubenswrapper[4748]: I0216 15:16:10.598036 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l4k9" event={"ID":"052d4b89-506b-4911-9f30-30ffb4b0e7b3","Type":"ContainerDied","Data":"6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c"} Feb 16 15:16:10 crc kubenswrapper[4748]: I0216 15:16:10.920973 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:10 crc kubenswrapper[4748]: I0216 15:16:10.920992 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.218:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:12 crc kubenswrapper[4748]: I0216 15:16:12.438567 4748 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 16 15:16:12 crc kubenswrapper[4748]: I0216 15:16:12.438674 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:12 crc kubenswrapper[4748]: I0216 15:16:12.679344 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 16 15:16:13 crc kubenswrapper[4748]: I0216 15:16:13.631062 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l4k9" event={"ID":"052d4b89-506b-4911-9f30-30ffb4b0e7b3","Type":"ContainerStarted","Data":"455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786"} Feb 16 15:16:13 crc kubenswrapper[4748]: I0216 15:16:13.663512 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5l4k9" podStartSLOduration=3.214166176 podStartE2EDuration="8.663491654s" podCreationTimestamp="2026-02-16 15:16:05 +0000 UTC" firstStartedPulling="2026-02-16 15:16:07.526500372 +0000 UTC m=+1393.218169421" lastFinishedPulling="2026-02-16 15:16:12.97582586 +0000 UTC m=+1398.667494899" observedRunningTime="2026-02-16 15:16:13.656604314 +0000 UTC m=+1399.348273353" watchObservedRunningTime="2026-02-16 15:16:13.663491654 +0000 UTC m=+1399.355160713" Feb 16 15:16:16 crc kubenswrapper[4748]: I0216 15:16:16.288967 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:16 crc kubenswrapper[4748]: I0216 15:16:16.289376 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:16 crc kubenswrapper[4748]: I0216 15:16:16.688733 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:16:16 crc kubenswrapper[4748]: I0216 15:16:16.689161 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" containerName="kube-state-metrics" 
containerID="cri-o://ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a" gracePeriod=30 Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.297285 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.361918 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5l4k9" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:16:17 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:16:17 crc kubenswrapper[4748]: > Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.387589 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cn79\" (UniqueName: \"kubernetes.io/projected/c06fd3c2-2bb4-40e8-8911-4f30daf28f43-kube-api-access-9cn79\") pod \"c06fd3c2-2bb4-40e8-8911-4f30daf28f43\" (UID: \"c06fd3c2-2bb4-40e8-8911-4f30daf28f43\") " Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.397953 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c06fd3c2-2bb4-40e8-8911-4f30daf28f43-kube-api-access-9cn79" (OuterVolumeSpecName: "kube-api-access-9cn79") pod "c06fd3c2-2bb4-40e8-8911-4f30daf28f43" (UID: "c06fd3c2-2bb4-40e8-8911-4f30daf28f43"). InnerVolumeSpecName "kube-api-access-9cn79". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.490349 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9cn79\" (UniqueName: \"kubernetes.io/projected/c06fd3c2-2bb4-40e8-8911-4f30daf28f43-kube-api-access-9cn79\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.670663 4748 generic.go:334] "Generic (PLEG): container finished" podID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" containerID="ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a" exitCode=2 Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.670733 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c06fd3c2-2bb4-40e8-8911-4f30daf28f43","Type":"ContainerDied","Data":"ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a"} Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.670763 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c06fd3c2-2bb4-40e8-8911-4f30daf28f43","Type":"ContainerDied","Data":"b14eddf0401524cb6eb65fcd35e485e527348f9d569e3a08c180b44a8b8b17e3"} Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.670787 4748 scope.go:117] "RemoveContainer" containerID="ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.670931 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.710969 4748 scope.go:117] "RemoveContainer" containerID="ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a" Feb 16 15:16:17 crc kubenswrapper[4748]: E0216 15:16:17.714282 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a\": container with ID starting with ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a not found: ID does not exist" containerID="ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.714320 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a"} err="failed to get container status \"ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a\": rpc error: code = NotFound desc = could not find container \"ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a\": container with ID starting with ddbc391e78b2481758fe3d5ea9fa5aea052abe355fa4f8e2a410a65b8d18298a not found: ID does not exist" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.719000 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.741475 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.753242 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:16:17 crc kubenswrapper[4748]: E0216 15:16:17.753776 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" containerName="kube-state-metrics" Feb 16 15:16:17 crc 
kubenswrapper[4748]: I0216 15:16:17.753794 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" containerName="kube-state-metrics" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.754000 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" containerName="kube-state-metrics" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.754816 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.761574 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.764201 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.766078 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.787692 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.796327 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.796401 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4qvx\" (UniqueName: \"kubernetes.io/projected/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-api-access-t4qvx\") pod \"kube-state-metrics-0\" (UID: 
\"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.796481 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.796529 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.796809 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.803425 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.898591 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.898707 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " 
pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.899095 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.899171 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4qvx\" (UniqueName: \"kubernetes.io/projected/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-api-access-t4qvx\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.908290 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.908933 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc kubenswrapper[4748]: I0216 15:16:17.910575 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e37e9a18-0d9c-4015-b877-06fc0dc9c908-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:17 crc 
kubenswrapper[4748]: I0216 15:16:17.923885 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4qvx\" (UniqueName: \"kubernetes.io/projected/e37e9a18-0d9c-4015-b877-06fc0dc9c908-kube-api-access-t4qvx\") pod \"kube-state-metrics-0\" (UID: \"e37e9a18-0d9c-4015-b877-06fc0dc9c908\") " pod="openstack/kube-state-metrics-0" Feb 16 15:16:18 crc kubenswrapper[4748]: I0216 15:16:18.084582 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 16 15:16:18 crc kubenswrapper[4748]: I0216 15:16:18.620247 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 16 15:16:18 crc kubenswrapper[4748]: I0216 15:16:18.682953 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e37e9a18-0d9c-4015-b877-06fc0dc9c908","Type":"ContainerStarted","Data":"466ba8188ec478b57755f2c91fcb8475a1ceacec4fc494af499a1982eb23d6fe"} Feb 16 15:16:18 crc kubenswrapper[4748]: I0216 15:16:18.691162 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.006847 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c06fd3c2-2bb4-40e8-8911-4f30daf28f43" path="/var/lib/kubelet/pods/c06fd3c2-2bb4-40e8-8911-4f30daf28f43/volumes" Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.419211 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.419816 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-central-agent" containerID="cri-o://8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f" gracePeriod=30 Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.419879 4748 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-notification-agent" containerID="cri-o://c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da" gracePeriod=30 Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.419941 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="sg-core" containerID="cri-o://b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36" gracePeriod=30 Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.419831 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="proxy-httpd" containerID="cri-o://cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944" gracePeriod=30 Feb 16 15:16:19 crc kubenswrapper[4748]: E0216 15:16:19.602387 4748 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fbec113_0eb9_46a2_88e5_561d2057cfd8.slice/crio-conmon-b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fbec113_0eb9_46a2_88e5_561d2057cfd8.slice/crio-cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.628021 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.693935 4748 generic.go:334] "Generic (PLEG): container finished" podID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerID="cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944" exitCode=0 Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.693969 4748 generic.go:334] "Generic (PLEG): container finished" podID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerID="b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36" exitCode=2 Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.694007 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerDied","Data":"cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944"} Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.694035 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerDied","Data":"b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36"} Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.695218 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e37e9a18-0d9c-4015-b877-06fc0dc9c908","Type":"ContainerStarted","Data":"32e42d79c17fc83fde5b485ce20866a335a2776dcd5735e4402b126111eb74fa"} Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.696569 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.698250 4748 generic.go:334] "Generic (PLEG): container finished" podID="26f08590-1f35-4ca4-b50d-5c342cde90a2" containerID="d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7" exitCode=137 Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.699292 4748 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.699461 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26f08590-1f35-4ca4-b50d-5c342cde90a2","Type":"ContainerDied","Data":"d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7"}
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.699488 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"26f08590-1f35-4ca4-b50d-5c342cde90a2","Type":"ContainerDied","Data":"6c5bc258f83bc9a7bf664e017390e8cc3c971470ba54a9ae845fca090eebe2e1"}
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.699504 4748 scope.go:117] "RemoveContainer" containerID="d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.736922 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.320408168 podStartE2EDuration="2.736896242s" podCreationTimestamp="2026-02-16 15:16:17 +0000 UTC" firstStartedPulling="2026-02-16 15:16:18.622889532 +0000 UTC m=+1404.314558581" lastFinishedPulling="2026-02-16 15:16:19.039377606 +0000 UTC m=+1404.731046655" observedRunningTime="2026-02-16 15:16:19.718019187 +0000 UTC m=+1405.409688236" watchObservedRunningTime="2026-02-16 15:16:19.736896242 +0000 UTC m=+1405.428565281"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.737974 4748 scope.go:117] "RemoveContainer" containerID="d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7"
Feb 16 15:16:19 crc kubenswrapper[4748]: E0216 15:16:19.738580 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7\": container with ID starting with d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7 not found: ID does not exist" containerID="d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.738636 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7"} err="failed to get container status \"d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7\": rpc error: code = NotFound desc = could not find container \"d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7\": container with ID starting with d939723b7908ce7f6a231bd9d806a87a0598fe2654eb96a8cb5cbcf26503fca7 not found: ID does not exist"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.750784 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-combined-ca-bundle\") pod \"26f08590-1f35-4ca4-b50d-5c342cde90a2\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") "
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.750831 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5qxj\" (UniqueName: \"kubernetes.io/projected/26f08590-1f35-4ca4-b50d-5c342cde90a2-kube-api-access-z5qxj\") pod \"26f08590-1f35-4ca4-b50d-5c342cde90a2\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") "
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.751020 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-config-data\") pod \"26f08590-1f35-4ca4-b50d-5c342cde90a2\" (UID: \"26f08590-1f35-4ca4-b50d-5c342cde90a2\") "
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.762486 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f08590-1f35-4ca4-b50d-5c342cde90a2-kube-api-access-z5qxj" (OuterVolumeSpecName: "kube-api-access-z5qxj") pod "26f08590-1f35-4ca4-b50d-5c342cde90a2" (UID: "26f08590-1f35-4ca4-b50d-5c342cde90a2"). InnerVolumeSpecName "kube-api-access-z5qxj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.784933 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-config-data" (OuterVolumeSpecName: "config-data") pod "26f08590-1f35-4ca4-b50d-5c342cde90a2" (UID: "26f08590-1f35-4ca4-b50d-5c342cde90a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.792750 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26f08590-1f35-4ca4-b50d-5c342cde90a2" (UID: "26f08590-1f35-4ca4-b50d-5c342cde90a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.839824 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.840336 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.846814 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.850954 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.853382 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.853416 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26f08590-1f35-4ca4-b50d-5c342cde90a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:19 crc kubenswrapper[4748]: I0216 15:16:19.853430 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5qxj\" (UniqueName: \"kubernetes.io/projected/26f08590-1f35-4ca4-b50d-5c342cde90a2-kube-api-access-z5qxj\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.032440 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.044988 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.064197 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:16:20 crc kubenswrapper[4748]: E0216 15:16:20.064700 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f08590-1f35-4ca4-b50d-5c342cde90a2" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.064739 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f08590-1f35-4ca4-b50d-5c342cde90a2" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.066008 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f08590-1f35-4ca4-b50d-5c342cde90a2" containerName="nova-cell1-novncproxy-novncproxy"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.066821 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.068985 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.069223 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.071943 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.074417 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.161414 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9sbb\" (UniqueName: \"kubernetes.io/projected/5629100c-e11d-40b2-bb5a-d61200b4d405-kube-api-access-n9sbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.161867 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.161927 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.161965 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.162010 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.265003 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9sbb\" (UniqueName: \"kubernetes.io/projected/5629100c-e11d-40b2-bb5a-d61200b4d405-kube-api-access-n9sbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.265108 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.265181 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.265222 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.265266 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.269426 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.269479 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.269546 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.281597 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5629100c-e11d-40b2-bb5a-d61200b4d405-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.284154 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9sbb\" (UniqueName: \"kubernetes.io/projected/5629100c-e11d-40b2-bb5a-d61200b4d405-kube-api-access-n9sbb\") pod \"nova-cell1-novncproxy-0\" (UID: \"5629100c-e11d-40b2-bb5a-d61200b4d405\") " pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.434460 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.821227 4748 generic.go:334] "Generic (PLEG): container finished" podID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerID="8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f" exitCode=0
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.821661 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerDied","Data":"8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f"}
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.846735 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.883301 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Feb 16 15:16:20 crc kubenswrapper[4748]: I0216 15:16:20.907297 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.029654 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26f08590-1f35-4ca4-b50d-5c342cde90a2" path="/var/lib/kubelet/pods/26f08590-1f35-4ca4-b50d-5c342cde90a2/volumes"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.106757 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4zqjr"]
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.109361 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.137543 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4zqjr"]
Feb 16 15:16:21 crc kubenswrapper[4748]: E0216 15:16:21.162755 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 15:16:21 crc kubenswrapper[4748]: E0216 15:16:21.162813 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 15:16:21 crc kubenswrapper[4748]: E0216 15:16:21.162940 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 15:16:21 crc kubenswrapper[4748]: E0216 15:16:21.164109 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.203621 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.203696 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.203767 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm7pj\" (UniqueName: \"kubernetes.io/projected/9d8e8156-dfec-42df-bb06-3e31424d2642-kube-api-access-zm7pj\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.203783 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.204074 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-config\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.204305 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.306018 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm7pj\" (UniqueName: \"kubernetes.io/projected/9d8e8156-dfec-42df-bb06-3e31424d2642-kube-api-access-zm7pj\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.306375 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.306471 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-config\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.306538 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.306601 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.306631 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.307632 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.307670 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dxqgm"]
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.308510 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-config\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.309321 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.309844 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.309896 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.310314 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d8e8156-dfec-42df-bb06-3e31424d2642-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.349546 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm7pj\" (UniqueName: \"kubernetes.io/projected/9d8e8156-dfec-42df-bb06-3e31424d2642-kube-api-access-zm7pj\") pod \"dnsmasq-dns-89c5cd4d5-4zqjr\" (UID: \"9d8e8156-dfec-42df-bb06-3e31424d2642\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.352509 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxqgm"]
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.408431 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-utilities\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.408551 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-catalog-content\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.408600 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd7jf\" (UniqueName: \"kubernetes.io/projected/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-kube-api-access-rd7jf\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.482249 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.512095 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-catalog-content\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.512590 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-catalog-content\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.512806 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd7jf\" (UniqueName: \"kubernetes.io/projected/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-kube-api-access-rd7jf\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.513356 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-utilities\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.513639 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-utilities\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.537310 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd7jf\" (UniqueName: \"kubernetes.io/projected/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-kube-api-access-rd7jf\") pod \"certified-operators-dxqgm\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") " pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.630446 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.897031 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5629100c-e11d-40b2-bb5a-d61200b4d405","Type":"ContainerStarted","Data":"6ec2cfd05e40b7a707684dd7f8a1416ad64b3e2d8d0f13161acebd8df850b27a"}
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.897077 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"5629100c-e11d-40b2-bb5a-d61200b4d405","Type":"ContainerStarted","Data":"60447fcddda01df0d582f4444ebe3bfed59172033faed97bb8f6b35dc5d9e374"}
Feb 16 15:16:21 crc kubenswrapper[4748]: I0216 15:16:21.938948 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.938919372 podStartE2EDuration="1.938919372s" podCreationTimestamp="2026-02-16 15:16:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:21.928606448 +0000 UTC m=+1407.620275487" watchObservedRunningTime="2026-02-16 15:16:21.938919372 +0000 UTC m=+1407.630588411"
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.100860 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4zqjr"]
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.322042 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dxqgm"]
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.910470 4748 generic.go:334] "Generic (PLEG): container finished" podID="9d8e8156-dfec-42df-bb06-3e31424d2642" containerID="6c2aaf6e6a6ff8f8c416915cd699958c0419c28077dd7e3bd2f8536561bc7fbc" exitCode=0
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.910813 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr" event={"ID":"9d8e8156-dfec-42df-bb06-3e31424d2642","Type":"ContainerDied","Data":"6c2aaf6e6a6ff8f8c416915cd699958c0419c28077dd7e3bd2f8536561bc7fbc"}
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.910841 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr" event={"ID":"9d8e8156-dfec-42df-bb06-3e31424d2642","Type":"ContainerStarted","Data":"bcf79115fb628e281b9c712e8496d627c210c55cffd1403f83c47f838552101e"}
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.914553 4748 generic.go:334] "Generic (PLEG): container finished" podID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerID="4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e" exitCode=0
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.914824 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxqgm" event={"ID":"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4","Type":"ContainerDied","Data":"4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e"}
Feb 16 15:16:22 crc kubenswrapper[4748]: I0216 15:16:22.914897 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxqgm" event={"ID":"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4","Type":"ContainerStarted","Data":"7c775950f48d2fe9cc5370355d572fc6c4423e1931b6e9a95ee77608d420f030"}
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.665456 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.685130 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-run-httpd\") pod \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") "
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.685182 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-config-data\") pod \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") "
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.685295 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-scripts\") pod \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") "
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.685376 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-log-httpd\") pod \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") "
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.685391 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-combined-ca-bundle\") pod \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") "
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.685443 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-sg-core-conf-yaml\") pod \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") "
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.685460 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssqck\" (UniqueName: \"kubernetes.io/projected/7fbec113-0eb9-46a2-88e5-561d2057cfd8-kube-api-access-ssqck\") pod \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\" (UID: \"7fbec113-0eb9-46a2-88e5-561d2057cfd8\") "
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.687386 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7fbec113-0eb9-46a2-88e5-561d2057cfd8" (UID: "7fbec113-0eb9-46a2-88e5-561d2057cfd8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.687918 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7fbec113-0eb9-46a2-88e5-561d2057cfd8" (UID: "7fbec113-0eb9-46a2-88e5-561d2057cfd8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.698215 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fbec113-0eb9-46a2-88e5-561d2057cfd8-kube-api-access-ssqck" (OuterVolumeSpecName: "kube-api-access-ssqck") pod "7fbec113-0eb9-46a2-88e5-561d2057cfd8" (UID: "7fbec113-0eb9-46a2-88e5-561d2057cfd8"). InnerVolumeSpecName "kube-api-access-ssqck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.741892 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-scripts" (OuterVolumeSpecName: "scripts") pod "7fbec113-0eb9-46a2-88e5-561d2057cfd8" (UID: "7fbec113-0eb9-46a2-88e5-561d2057cfd8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.788431 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.788465 4748 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.788474 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssqck\" (UniqueName: \"kubernetes.io/projected/7fbec113-0eb9-46a2-88e5-561d2057cfd8-kube-api-access-ssqck\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.788485 4748 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7fbec113-0eb9-46a2-88e5-561d2057cfd8-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.851762 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7fbec113-0eb9-46a2-88e5-561d2057cfd8" (UID: "7fbec113-0eb9-46a2-88e5-561d2057cfd8"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.892225 4748 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.928520 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7fbec113-0eb9-46a2-88e5-561d2057cfd8" (UID: "7fbec113-0eb9-46a2-88e5-561d2057cfd8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.940402 4748 generic.go:334] "Generic (PLEG): container finished" podID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerID="c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da" exitCode=0 Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.940498 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerDied","Data":"c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da"} Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.940531 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7fbec113-0eb9-46a2-88e5-561d2057cfd8","Type":"ContainerDied","Data":"895d047c74dfeb2e4bb5d3539e85c538c144b046fa7061b719fd89b074a8d8d6"} Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.940568 4748 scope.go:117] "RemoveContainer" containerID="cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.940787 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.945857 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-config-data" (OuterVolumeSpecName: "config-data") pod "7fbec113-0eb9-46a2-88e5-561d2057cfd8" (UID: "7fbec113-0eb9-46a2-88e5-561d2057cfd8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.951374 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr" event={"ID":"9d8e8156-dfec-42df-bb06-3e31424d2642","Type":"ContainerStarted","Data":"085ef474058ed21f129fe2f3c75ca0e30b6833b09a1815dcf12e199ef4d2a189"} Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.951816 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.972919 4748 scope.go:117] "RemoveContainer" containerID="b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.986102 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr" podStartSLOduration=2.986080052 podStartE2EDuration="2.986080052s" podCreationTimestamp="2026-02-16 15:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:23.973962724 +0000 UTC m=+1409.665631763" watchObservedRunningTime="2026-02-16 15:16:23.986080052 +0000 UTC m=+1409.677749091" Feb 16 15:16:23 crc kubenswrapper[4748]: I0216 15:16:23.995421 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:23 crc 
kubenswrapper[4748]: I0216 15:16:23.995465 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fbec113-0eb9-46a2-88e5-561d2057cfd8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.004303 4748 scope.go:117] "RemoveContainer" containerID="c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.026782 4748 scope.go:117] "RemoveContainer" containerID="8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.053490 4748 scope.go:117] "RemoveContainer" containerID="cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944" Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.053850 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944\": container with ID starting with cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944 not found: ID does not exist" containerID="cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.053887 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944"} err="failed to get container status \"cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944\": rpc error: code = NotFound desc = could not find container \"cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944\": container with ID starting with cc42f3da48213cccd2963f5b2168ad43e57517e78cb1ac6e32718b7f6e287944 not found: ID does not exist" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.053916 4748 scope.go:117] "RemoveContainer" 
containerID="b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36" Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.054261 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36\": container with ID starting with b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36 not found: ID does not exist" containerID="b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.054310 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36"} err="failed to get container status \"b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36\": rpc error: code = NotFound desc = could not find container \"b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36\": container with ID starting with b3ec130bbcb87984ede4469e2176a7f7e9e8b8aa01579b48b84ca27b1a094d36 not found: ID does not exist" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.054348 4748 scope.go:117] "RemoveContainer" containerID="c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da" Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.054635 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da\": container with ID starting with c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da not found: ID does not exist" containerID="c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.054658 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da"} err="failed to get container status \"c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da\": rpc error: code = NotFound desc = could not find container \"c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da\": container with ID starting with c831c6d13354acaf284076b8d96b932d46451d64c082e7ddea8cecd6ab4449da not found: ID does not exist" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.054677 4748 scope.go:117] "RemoveContainer" containerID="8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f" Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.054922 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f\": container with ID starting with 8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f not found: ID does not exist" containerID="8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.054942 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f"} err="failed to get container status \"8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f\": rpc error: code = NotFound desc = could not find container \"8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f\": container with ID starting with 8478181c6c4655b808f673111bcf0268d7f82390ecf23cb2dac6c9cc32e3af3f not found: ID does not exist" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.276338 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.288646 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 
16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.332681 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.333132 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-notification-agent" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.333149 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-notification-agent" Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.333180 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-central-agent" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.333187 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-central-agent" Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.333194 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="proxy-httpd" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.333199 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="proxy-httpd" Feb 16 15:16:24 crc kubenswrapper[4748]: E0216 15:16:24.333217 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="sg-core" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.333223 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="sg-core" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.333409 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="sg-core" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 
15:16:24.333428 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="proxy-httpd" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.333438 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-central-agent" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.333451 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" containerName="ceilometer-notification-agent" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.337003 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.339785 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.340102 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.345069 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.355513 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.403414 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-log-httpd\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.403470 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vhzb\" (UniqueName: 
\"kubernetes.io/projected/657f75f7-b5ac-4748-a65e-2b8155d72262-kube-api-access-9vhzb\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.403608 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.403857 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-config-data\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.403942 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-scripts\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.404007 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.404077 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.404133 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-run-httpd\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506467 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-log-httpd\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506538 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vhzb\" (UniqueName: \"kubernetes.io/projected/657f75f7-b5ac-4748-a65e-2b8155d72262-kube-api-access-9vhzb\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506590 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506685 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-config-data\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506806 4748 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-scripts\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506857 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506902 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.506941 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-run-httpd\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.507071 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-log-httpd\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.507533 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-run-httpd\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc 
kubenswrapper[4748]: I0216 15:16:24.511277 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-config-data\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.511300 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.512211 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.516396 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-scripts\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.527940 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.530475 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vhzb\" (UniqueName: \"kubernetes.io/projected/657f75f7-b5ac-4748-a65e-2b8155d72262-kube-api-access-9vhzb\") pod \"ceilometer-0\" 
(UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") " pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.606693 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.607077 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-log" containerID="cri-o://4358ec7c551b9fe017be57357c84ab4d57be6cb933315ff3fa74e07536c46b8d" gracePeriod=30 Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.607279 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-api" containerID="cri-o://9ab190eca756fb1134284e8497b0dd50afcb6ba22e914ee74352d9097e0fb4b7" gracePeriod=30 Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.658646 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.971215 4748 generic.go:334] "Generic (PLEG): container finished" podID="04ec54af-7987-472f-82ea-f761231ca3f6" containerID="4358ec7c551b9fe017be57357c84ab4d57be6cb933315ff3fa74e07536c46b8d" exitCode=143 Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.971383 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04ec54af-7987-472f-82ea-f761231ca3f6","Type":"ContainerDied","Data":"4358ec7c551b9fe017be57357c84ab4d57be6cb933315ff3fa74e07536c46b8d"} Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.979631 4748 generic.go:334] "Generic (PLEG): container finished" podID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerID="1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451" exitCode=0 Feb 16 15:16:24 crc kubenswrapper[4748]: I0216 15:16:24.979737 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxqgm" event={"ID":"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4","Type":"ContainerDied","Data":"1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451"} Feb 16 15:16:25 crc kubenswrapper[4748]: I0216 15:16:25.029106 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fbec113-0eb9-46a2-88e5-561d2057cfd8" path="/var/lib/kubelet/pods/7fbec113-0eb9-46a2-88e5-561d2057cfd8/volumes" Feb 16 15:16:25 crc kubenswrapper[4748]: I0216 15:16:25.289361 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:25 crc kubenswrapper[4748]: W0216 15:16:25.301230 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod657f75f7_b5ac_4748_a65e_2b8155d72262.slice/crio-2b867af6dc21323cc25a7ba39801250a7277ee4c3c5b30c22a2fee69203dcb27 WatchSource:0}: Error finding container 2b867af6dc21323cc25a7ba39801250a7277ee4c3c5b30c22a2fee69203dcb27: 
Status 404 returned error can't find the container with id 2b867af6dc21323cc25a7ba39801250a7277ee4c3c5b30c22a2fee69203dcb27 Feb 16 15:16:25 crc kubenswrapper[4748]: I0216 15:16:25.435172 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:16:26 crc kubenswrapper[4748]: I0216 15:16:26.027018 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxqgm" event={"ID":"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4","Type":"ContainerStarted","Data":"324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1"} Feb 16 15:16:26 crc kubenswrapper[4748]: I0216 15:16:26.029968 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerStarted","Data":"2b867af6dc21323cc25a7ba39801250a7277ee4c3c5b30c22a2fee69203dcb27"} Feb 16 15:16:26 crc kubenswrapper[4748]: I0216 15:16:26.059416 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dxqgm" podStartSLOduration=2.459692865 podStartE2EDuration="5.059392786s" podCreationTimestamp="2026-02-16 15:16:21 +0000 UTC" firstStartedPulling="2026-02-16 15:16:22.919845798 +0000 UTC m=+1408.611514837" lastFinishedPulling="2026-02-16 15:16:25.519545719 +0000 UTC m=+1411.211214758" observedRunningTime="2026-02-16 15:16:26.047199186 +0000 UTC m=+1411.738868225" watchObservedRunningTime="2026-02-16 15:16:26.059392786 +0000 UTC m=+1411.751061825" Feb 16 15:16:26 crc kubenswrapper[4748]: I0216 15:16:26.711269 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:27 crc kubenswrapper[4748]: I0216 15:16:27.042805 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerStarted","Data":"907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6"} Feb 16 15:16:27 crc kubenswrapper[4748]: I0216 15:16:27.043239 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerStarted","Data":"c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e"} Feb 16 15:16:27 crc kubenswrapper[4748]: I0216 15:16:27.384299 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5l4k9" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:16:27 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:16:27 crc kubenswrapper[4748]: > Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.057016 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerStarted","Data":"8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc"} Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.060402 4748 generic.go:334] "Generic (PLEG): container finished" podID="04ec54af-7987-472f-82ea-f761231ca3f6" containerID="9ab190eca756fb1134284e8497b0dd50afcb6ba22e914ee74352d9097e0fb4b7" exitCode=0 Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.060432 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04ec54af-7987-472f-82ea-f761231ca3f6","Type":"ContainerDied","Data":"9ab190eca756fb1134284e8497b0dd50afcb6ba22e914ee74352d9097e0fb4b7"} Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.095369 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.375044 4748 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.427502 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7grx\" (UniqueName: \"kubernetes.io/projected/04ec54af-7987-472f-82ea-f761231ca3f6-kube-api-access-s7grx\") pod \"04ec54af-7987-472f-82ea-f761231ca3f6\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.427662 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04ec54af-7987-472f-82ea-f761231ca3f6-logs\") pod \"04ec54af-7987-472f-82ea-f761231ca3f6\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.427730 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-config-data\") pod \"04ec54af-7987-472f-82ea-f761231ca3f6\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.427836 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-combined-ca-bundle\") pod \"04ec54af-7987-472f-82ea-f761231ca3f6\" (UID: \"04ec54af-7987-472f-82ea-f761231ca3f6\") " Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.428093 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04ec54af-7987-472f-82ea-f761231ca3f6-logs" (OuterVolumeSpecName: "logs") pod "04ec54af-7987-472f-82ea-f761231ca3f6" (UID: "04ec54af-7987-472f-82ea-f761231ca3f6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.428556 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04ec54af-7987-472f-82ea-f761231ca3f6-logs\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.433479 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ec54af-7987-472f-82ea-f761231ca3f6-kube-api-access-s7grx" (OuterVolumeSpecName: "kube-api-access-s7grx") pod "04ec54af-7987-472f-82ea-f761231ca3f6" (UID: "04ec54af-7987-472f-82ea-f761231ca3f6"). InnerVolumeSpecName "kube-api-access-s7grx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.473229 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-config-data" (OuterVolumeSpecName: "config-data") pod "04ec54af-7987-472f-82ea-f761231ca3f6" (UID: "04ec54af-7987-472f-82ea-f761231ca3f6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.474092 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04ec54af-7987-472f-82ea-f761231ca3f6" (UID: "04ec54af-7987-472f-82ea-f761231ca3f6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.533937 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7grx\" (UniqueName: \"kubernetes.io/projected/04ec54af-7987-472f-82ea-f761231ca3f6-kube-api-access-s7grx\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.533982 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:28 crc kubenswrapper[4748]: I0216 15:16:28.534044 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ec54af-7987-472f-82ea-f761231ca3f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.070669 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"04ec54af-7987-472f-82ea-f761231ca3f6","Type":"ContainerDied","Data":"392406f6c35882ee8960c43ec99e4121e7526a91cd7a495416fcdb1ad4898e06"} Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.070743 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.070763 4748 scope.go:117] "RemoveContainer" containerID="9ab190eca756fb1134284e8497b0dd50afcb6ba22e914ee74352d9097e0fb4b7" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.102516 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.105914 4748 scope.go:117] "RemoveContainer" containerID="4358ec7c551b9fe017be57357c84ab4d57be6cb933315ff3fa74e07536c46b8d" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.120990 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.138794 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:29 crc kubenswrapper[4748]: E0216 15:16:29.139323 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-api" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.139349 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-api" Feb 16 15:16:29 crc kubenswrapper[4748]: E0216 15:16:29.139394 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-log" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.139402 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-log" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.139660 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" containerName="nova-api-api" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.139700 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" 
containerName="nova-api-log" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.141158 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.145177 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.145267 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.145365 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.165544 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.257513 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84sjf\" (UniqueName: \"kubernetes.io/projected/6c7d6c0e-64e8-4bc9-8d79-64c126969605-kube-api-access-84sjf\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.258106 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7d6c0e-64e8-4bc9-8d79-64c126969605-logs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.258247 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 
15:16:29.258370 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.258539 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-config-data\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.258653 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.360452 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84sjf\" (UniqueName: \"kubernetes.io/projected/6c7d6c0e-64e8-4bc9-8d79-64c126969605-kube-api-access-84sjf\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.360842 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7d6c0e-64e8-4bc9-8d79-64c126969605-logs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.360900 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.360952 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.361040 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-config-data\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.361071 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.362758 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7d6c0e-64e8-4bc9-8d79-64c126969605-logs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.375490 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-public-tls-certs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.379593 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.381112 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-internal-tls-certs\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.386415 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84sjf\" (UniqueName: \"kubernetes.io/projected/6c7d6c0e-64e8-4bc9-8d79-64c126969605-kube-api-access-84sjf\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.387299 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-config-data\") pod \"nova-api-0\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") " pod="openstack/nova-api-0" Feb 16 15:16:29 crc kubenswrapper[4748]: I0216 15:16:29.633798 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.085967 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerStarted","Data":"d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8"} Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.087127 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.086491 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-central-agent" containerID="cri-o://c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e" gracePeriod=30 Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.087204 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="proxy-httpd" containerID="cri-o://d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8" gracePeriod=30 Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.087477 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="sg-core" containerID="cri-o://8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc" gracePeriod=30 Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.087517 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-notification-agent" containerID="cri-o://907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6" gracePeriod=30 Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.118912 4748 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.26931518 podStartE2EDuration="6.118887361s" podCreationTimestamp="2026-02-16 15:16:24 +0000 UTC" firstStartedPulling="2026-02-16 15:16:25.304675154 +0000 UTC m=+1410.996344193" lastFinishedPulling="2026-02-16 15:16:29.154247335 +0000 UTC m=+1414.845916374" observedRunningTime="2026-02-16 15:16:30.112885414 +0000 UTC m=+1415.804554473" watchObservedRunningTime="2026-02-16 15:16:30.118887361 +0000 UTC m=+1415.810556400" Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.163353 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.435460 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:16:30 crc kubenswrapper[4748]: I0216 15:16:30.459451 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.008695 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ec54af-7987-472f-82ea-f761231ca3f6" path="/var/lib/kubelet/pods/04ec54af-7987-472f-82ea-f761231ca3f6/volumes" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.098437 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c7d6c0e-64e8-4bc9-8d79-64c126969605","Type":"ContainerStarted","Data":"8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e"} Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.098492 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c7d6c0e-64e8-4bc9-8d79-64c126969605","Type":"ContainerStarted","Data":"b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41"} Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.098507 4748 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-0" event={"ID":"6c7d6c0e-64e8-4bc9-8d79-64c126969605","Type":"ContainerStarted","Data":"84667fdd8b76bd3cf0e88e91496b51fd5d19d299d06d0ef9efbe6e3d041bc4e9"} Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.116242 4748 generic.go:334] "Generic (PLEG): container finished" podID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerID="d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8" exitCode=0 Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.116273 4748 generic.go:334] "Generic (PLEG): container finished" podID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerID="8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc" exitCode=2 Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.116282 4748 generic.go:334] "Generic (PLEG): container finished" podID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerID="907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6" exitCode=0 Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.116332 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerDied","Data":"d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8"} Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.116690 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerDied","Data":"8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc"} Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.116707 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerDied","Data":"907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6"} Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.127055 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-api-0" podStartSLOduration=2.127035477 podStartE2EDuration="2.127035477s" podCreationTimestamp="2026-02-16 15:16:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:31.124616708 +0000 UTC m=+1416.816285767" watchObservedRunningTime="2026-02-16 15:16:31.127035477 +0000 UTC m=+1416.818704516" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.148144 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.336109 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-kpzzb"] Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.337774 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.339595 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.339692 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.357420 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kpzzb"] Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.404740 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.404873 4748 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-config-data\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.405119 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-scripts\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.405455 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgnbz\" (UniqueName: \"kubernetes.io/projected/d5e00d08-e68a-479e-961b-38bc4e12b351-kube-api-access-zgnbz\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.483679 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-4zqjr" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.508042 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgnbz\" (UniqueName: \"kubernetes.io/projected/d5e00d08-e68a-479e-961b-38bc4e12b351-kube-api-access-zgnbz\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.508131 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-combined-ca-bundle\") pod 
\"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.508200 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-config-data\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.508285 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-scripts\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.514011 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.515654 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-config-data\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.516381 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-scripts\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " 
pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.543568 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgnbz\" (UniqueName: \"kubernetes.io/projected/d5e00d08-e68a-479e-961b-38bc4e12b351-kube-api-access-zgnbz\") pod \"nova-cell1-cell-mapping-kpzzb\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.565206 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-75hvp"] Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.565456 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" podUID="be837065-1402-43d8-a26a-b3997a11e226" containerName="dnsmasq-dns" containerID="cri-o://033e37a7466be3463c27114d3ee521be2206cd1ee8fa34c1ddcd075797cd34fb" gracePeriod=10 Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.631958 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dxqgm" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.632366 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dxqgm" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.659424 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:31 crc kubenswrapper[4748]: I0216 15:16:31.762427 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dxqgm" Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.141446 4748 generic.go:334] "Generic (PLEG): container finished" podID="be837065-1402-43d8-a26a-b3997a11e226" containerID="033e37a7466be3463c27114d3ee521be2206cd1ee8fa34c1ddcd075797cd34fb" exitCode=0 Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.141924 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" event={"ID":"be837065-1402-43d8-a26a-b3997a11e226","Type":"ContainerDied","Data":"033e37a7466be3463c27114d3ee521be2206cd1ee8fa34c1ddcd075797cd34fb"} Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.207266 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dxqgm" Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.332882 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-75hvp"
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.433575 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-kpzzb"]
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.452965 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pk42k\" (UniqueName: \"kubernetes.io/projected/be837065-1402-43d8-a26a-b3997a11e226-kube-api-access-pk42k\") pod \"be837065-1402-43d8-a26a-b3997a11e226\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") "
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.453136 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-swift-storage-0\") pod \"be837065-1402-43d8-a26a-b3997a11e226\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") "
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.453168 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-config\") pod \"be837065-1402-43d8-a26a-b3997a11e226\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") "
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.453206 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-sb\") pod \"be837065-1402-43d8-a26a-b3997a11e226\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") "
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.453272 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-nb\") pod \"be837065-1402-43d8-a26a-b3997a11e226\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") "
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.453304 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-svc\") pod \"be837065-1402-43d8-a26a-b3997a11e226\" (UID: \"be837065-1402-43d8-a26a-b3997a11e226\") "
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.463928 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be837065-1402-43d8-a26a-b3997a11e226-kube-api-access-pk42k" (OuterVolumeSpecName: "kube-api-access-pk42k") pod "be837065-1402-43d8-a26a-b3997a11e226" (UID: "be837065-1402-43d8-a26a-b3997a11e226"). InnerVolumeSpecName "kube-api-access-pk42k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.555788 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pk42k\" (UniqueName: \"kubernetes.io/projected/be837065-1402-43d8-a26a-b3997a11e226-kube-api-access-pk42k\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.576550 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "be837065-1402-43d8-a26a-b3997a11e226" (UID: "be837065-1402-43d8-a26a-b3997a11e226"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.595221 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "be837065-1402-43d8-a26a-b3997a11e226" (UID: "be837065-1402-43d8-a26a-b3997a11e226"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.603379 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "be837065-1402-43d8-a26a-b3997a11e226" (UID: "be837065-1402-43d8-a26a-b3997a11e226"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.642543 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-config" (OuterVolumeSpecName: "config") pod "be837065-1402-43d8-a26a-b3997a11e226" (UID: "be837065-1402-43d8-a26a-b3997a11e226"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.660338 4748 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-config\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.660392 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.660404 4748 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.660415 4748 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.682996 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "be837065-1402-43d8-a26a-b3997a11e226" (UID: "be837065-1402-43d8-a26a-b3997a11e226"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 16 15:16:32 crc kubenswrapper[4748]: I0216 15:16:32.761830 4748 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/be837065-1402-43d8-a26a-b3997a11e226-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:32 crc kubenswrapper[4748]: E0216 15:16:32.996294 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.090468 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxqgm"]
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.154435 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" event={"ID":"be837065-1402-43d8-a26a-b3997a11e226","Type":"ContainerDied","Data":"3ebc1cbb98255d981d471612d2afd1fbcca0242f77c930de4bdc5efed7e5cca1"}
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.154489 4748 scope.go:117] "RemoveContainer" containerID="033e37a7466be3463c27114d3ee521be2206cd1ee8fa34c1ddcd075797cd34fb"
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.154631 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-75hvp"
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.164404 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kpzzb" event={"ID":"d5e00d08-e68a-479e-961b-38bc4e12b351","Type":"ContainerStarted","Data":"d53711d99957759a4e5a8ebb3530b2da6e6eeb4d48c822ace2e842d1fa401f2b"}
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.164454 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kpzzb" event={"ID":"d5e00d08-e68a-479e-961b-38bc4e12b351","Type":"ContainerStarted","Data":"32de0af9bca4d5587d0079307f259960ee20ab1b8fadb0a5627084a4f78461ad"}
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.196475 4748 scope.go:117] "RemoveContainer" containerID="c7075cd713a5388fd01a246edfda53571d4105aa1b19b23f02d4934bd871765f"
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.208962 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-75hvp"]
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.221955 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-75hvp"]
Feb 16 15:16:33 crc kubenswrapper[4748]: I0216 15:16:33.225024 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-kpzzb" podStartSLOduration=2.225002367 podStartE2EDuration="2.225002367s" podCreationTimestamp="2026-02-16 15:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:33.189885174 +0000 UTC m=+1418.881554223" watchObservedRunningTime="2026-02-16 15:16:33.225002367 +0000 UTC m=+1418.916671406"
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.177568 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dxqgm" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="registry-server" containerID="cri-o://324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1" gracePeriod=2
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.729672 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.729767 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.777035 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.809489 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-catalog-content\") pod \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") "
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.809535 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd7jf\" (UniqueName: \"kubernetes.io/projected/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-kube-api-access-rd7jf\") pod \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") "
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.810654 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-utilities\") pod \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\" (UID: \"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4\") "
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.811075 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-utilities" (OuterVolumeSpecName: "utilities") pod "a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" (UID: "a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.811267 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.814706 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-kube-api-access-rd7jf" (OuterVolumeSpecName: "kube-api-access-rd7jf") pod "a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" (UID: "a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4"). InnerVolumeSpecName "kube-api-access-rd7jf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.857648 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" (UID: "a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.914200 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:34 crc kubenswrapper[4748]: I0216 15:16:34.914251 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd7jf\" (UniqueName: \"kubernetes.io/projected/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4-kube-api-access-rd7jf\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.019242 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be837065-1402-43d8-a26a-b3997a11e226" path="/var/lib/kubelet/pods/be837065-1402-43d8-a26a-b3997a11e226/volumes"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.188129 4748 generic.go:334] "Generic (PLEG): container finished" podID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerID="324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1" exitCode=0
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.188170 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxqgm" event={"ID":"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4","Type":"ContainerDied","Data":"324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1"}
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.188197 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dxqgm" event={"ID":"a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4","Type":"ContainerDied","Data":"7c775950f48d2fe9cc5370355d572fc6c4423e1931b6e9a95ee77608d420f030"}
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.188212 4748 scope.go:117] "RemoveContainer" containerID="324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.188346 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dxqgm"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.226281 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dxqgm"]
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.239130 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dxqgm"]
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.291115 4748 scope.go:117] "RemoveContainer" containerID="1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.352192 4748 scope.go:117] "RemoveContainer" containerID="4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.384316 4748 scope.go:117] "RemoveContainer" containerID="324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1"
Feb 16 15:16:35 crc kubenswrapper[4748]: E0216 15:16:35.387196 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1\": container with ID starting with 324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1 not found: ID does not exist" containerID="324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.387236 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1"} err="failed to get container status \"324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1\": rpc error: code = NotFound desc = could not find container \"324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1\": container with ID starting with 324971e42569bebb4ab85b6d70d1f9ba45167e7062c856964e9838ae693330f1 not found: ID does not exist"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.387257 4748 scope.go:117] "RemoveContainer" containerID="1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451"
Feb 16 15:16:35 crc kubenswrapper[4748]: E0216 15:16:35.387690 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451\": container with ID starting with 1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451 not found: ID does not exist" containerID="1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.387747 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451"} err="failed to get container status \"1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451\": rpc error: code = NotFound desc = could not find container \"1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451\": container with ID starting with 1249bfbf3f0ca6619f94e978812e5f27aed30daf15b4892dcd145006b3631451 not found: ID does not exist"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.387778 4748 scope.go:117] "RemoveContainer" containerID="4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e"
Feb 16 15:16:35 crc kubenswrapper[4748]: E0216 15:16:35.388129 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e\": container with ID starting with 4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e not found: ID does not exist" containerID="4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.388150 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e"} err="failed to get container status \"4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e\": rpc error: code = NotFound desc = could not find container \"4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e\": container with ID starting with 4f74fd7d7818fc8bfadd78641f1a1ed790aee88e5dccf101bea3429f58b6a90e not found: ID does not exist"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.644958 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.736481 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-sg-core-conf-yaml\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.736571 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-config-data\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.736769 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-run-httpd\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.736815 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-combined-ca-bundle\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.736868 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-log-httpd\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.736892 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-ceilometer-tls-certs\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.736927 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vhzb\" (UniqueName: \"kubernetes.io/projected/657f75f7-b5ac-4748-a65e-2b8155d72262-kube-api-access-9vhzb\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.737319 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.737377 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.737405 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-scripts\") pod \"657f75f7-b5ac-4748-a65e-2b8155d72262\" (UID: \"657f75f7-b5ac-4748-a65e-2b8155d72262\") "
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.738132 4748 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-run-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.738154 4748 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/657f75f7-b5ac-4748-a65e-2b8155d72262-log-httpd\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.741968 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-scripts" (OuterVolumeSpecName: "scripts") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.742926 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/657f75f7-b5ac-4748-a65e-2b8155d72262-kube-api-access-9vhzb" (OuterVolumeSpecName: "kube-api-access-9vhzb") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "kube-api-access-9vhzb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.775169 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.807503 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.822290 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.840182 4748 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.840449 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vhzb\" (UniqueName: \"kubernetes.io/projected/657f75f7-b5ac-4748-a65e-2b8155d72262-kube-api-access-9vhzb\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.840540 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-scripts\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.840602 4748 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.840668 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.862131 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-config-data" (OuterVolumeSpecName: "config-data") pod "657f75f7-b5ac-4748-a65e-2b8155d72262" (UID: "657f75f7-b5ac-4748-a65e-2b8155d72262"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:35 crc kubenswrapper[4748]: I0216 15:16:35.942101 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/657f75f7-b5ac-4748-a65e-2b8155d72262-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.204983 4748 generic.go:334] "Generic (PLEG): container finished" podID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerID="c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e" exitCode=0
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.205063 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerDied","Data":"c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e"}
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.205069 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.205095 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"657f75f7-b5ac-4748-a65e-2b8155d72262","Type":"ContainerDied","Data":"2b867af6dc21323cc25a7ba39801250a7277ee4c3c5b30c22a2fee69203dcb27"}
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.205117 4748 scope.go:117] "RemoveContainer" containerID="d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.235970 4748 scope.go:117] "RemoveContainer" containerID="8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.265848 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.268285 4748 scope.go:117] "RemoveContainer" containerID="907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.281916 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.297826 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298250 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be837065-1402-43d8-a26a-b3997a11e226" containerName="init"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298267 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="be837065-1402-43d8-a26a-b3997a11e226" containerName="init"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298283 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="extract-content"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298290 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="extract-content"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298303 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="proxy-httpd"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298310 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="proxy-httpd"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298322 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="sg-core"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298328 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="sg-core"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298348 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-notification-agent"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298354 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-notification-agent"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298367 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be837065-1402-43d8-a26a-b3997a11e226" containerName="dnsmasq-dns"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298372 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="be837065-1402-43d8-a26a-b3997a11e226" containerName="dnsmasq-dns"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298387 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="registry-server"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298394 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="registry-server"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298413 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="extract-utilities"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298421 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="extract-utilities"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.298433 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-central-agent"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298439 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-central-agent"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298639 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-central-agent"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298654 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="ceilometer-notification-agent"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298674 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="proxy-httpd"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298684 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" containerName="sg-core"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298694 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" containerName="registry-server"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.298704 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="be837065-1402-43d8-a26a-b3997a11e226" containerName="dnsmasq-dns"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.300472 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.308201 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.308422 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.311936 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.321731 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.335881 4748 scope.go:117] "RemoveContainer" containerID="c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.350613 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.350676 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60df5608-339f-4262-8459-eb5359287bd8-log-httpd\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.350758 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbq8l\" (UniqueName: \"kubernetes.io/projected/60df5608-339f-4262-8459-eb5359287bd8-kube-api-access-wbq8l\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.350782 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60df5608-339f-4262-8459-eb5359287bd8-run-httpd\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.350801 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.350824 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-config-data\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.368371 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.374008 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-scripts\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.448860 4748 scope.go:117] "RemoveContainer" containerID="d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.458355 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8\": container with ID starting with d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8 not found: ID does not exist" containerID="d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.458403 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8"} err="failed to get container status \"d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8\": rpc error: code = NotFound desc = could not find container \"d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8\": container with ID starting with d14499c76d9254d53c92e85feb9feef0f1b7d9b6a5a7aef3763760348f2da2c8 not found: ID does not exist"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.458430 4748 scope.go:117] "RemoveContainer" containerID="8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc"
Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.478941 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc\": container with ID starting with 8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc not found: ID does not exist" containerID="8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc"
Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.478989 4748 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc"} err="failed to get container status \"8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc\": rpc error: code = NotFound desc = could not find container \"8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc\": container with ID starting with 8a73471269b6c70a3c2cd97fde71eb33321dfb2d447e345ea75661ac8680d8bc not found: ID does not exist" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.479027 4748 scope.go:117] "RemoveContainer" containerID="907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480074 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-config-data\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480125 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480174 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-scripts\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480241 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480285 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60df5608-339f-4262-8459-eb5359287bd8-log-httpd\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480348 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbq8l\" (UniqueName: \"kubernetes.io/projected/60df5608-339f-4262-8459-eb5359287bd8-kube-api-access-wbq8l\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480365 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60df5608-339f-4262-8459-eb5359287bd8-run-httpd\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.480383 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.481558 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/60df5608-339f-4262-8459-eb5359287bd8-log-httpd\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.481832 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/60df5608-339f-4262-8459-eb5359287bd8-run-httpd\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.490938 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6\": container with ID starting with 907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6 not found: ID does not exist" containerID="907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.490991 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6"} err="failed to get container status \"907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6\": rpc error: code = NotFound desc = could not find container \"907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6\": container with ID starting with 907491e9de93c71968250e704dfb9ffbb94d644a32498adc07344b1770e70cd6 not found: ID does not exist" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.491018 4748 scope.go:117] "RemoveContainer" containerID="c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.495483 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: E0216 15:16:36.495605 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e\": container with ID starting with c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e not found: ID does not exist" containerID="c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.495634 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e"} err="failed to get container status \"c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e\": rpc error: code = NotFound desc = could not find container \"c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e\": container with ID starting with c9efff63fd2ff338681de15989b9ec599c8775e082077709546e39f5024a6c5e not found: ID does not exist" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.507520 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.507855 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbq8l\" (UniqueName: \"kubernetes.io/projected/60df5608-339f-4262-8459-eb5359287bd8-kube-api-access-wbq8l\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.508994 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-config-data\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.521491 4748 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-scripts\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.527582 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60df5608-339f-4262-8459-eb5359287bd8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"60df5608-339f-4262-8459-eb5359287bd8\") " pod="openstack/ceilometer-0" Feb 16 15:16:36 crc kubenswrapper[4748]: I0216 15:16:36.713641 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 16 15:16:37 crc kubenswrapper[4748]: I0216 15:16:37.007283 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="657f75f7-b5ac-4748-a65e-2b8155d72262" path="/var/lib/kubelet/pods/657f75f7-b5ac-4748-a65e-2b8155d72262/volumes" Feb 16 15:16:37 crc kubenswrapper[4748]: I0216 15:16:37.009049 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4" path="/var/lib/kubelet/pods/a0d6f3a9-eaa3-4518-81ff-bd1fa5c1a2c4/volumes" Feb 16 15:16:37 crc kubenswrapper[4748]: I0216 15:16:37.096707 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-75hvp" podUID="be837065-1402-43d8-a26a-b3997a11e226" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.212:5353: i/o timeout" Feb 16 15:16:37 crc kubenswrapper[4748]: I0216 15:16:37.189405 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 16 15:16:37 crc kubenswrapper[4748]: I0216 15:16:37.218295 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"60df5608-339f-4262-8459-eb5359287bd8","Type":"ContainerStarted","Data":"2dd0949f1fcd089d3a4cc60709385d60e3bddedef6b102c54bb6b073f37b9412"} Feb 16 15:16:37 crc kubenswrapper[4748]: I0216 15:16:37.427200 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5l4k9" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:16:37 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:16:37 crc kubenswrapper[4748]: > Feb 16 15:16:38 crc kubenswrapper[4748]: I0216 15:16:38.232093 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60df5608-339f-4262-8459-eb5359287bd8","Type":"ContainerStarted","Data":"4b635c313bb2d59eb39e4b0f6a91783be990ba212e952654e345b06d696d254b"} Feb 16 15:16:39 crc kubenswrapper[4748]: I0216 15:16:39.247406 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60df5608-339f-4262-8459-eb5359287bd8","Type":"ContainerStarted","Data":"2daa964a58b3a3e88a5f9661839234cce99aadd385c203a45632f1e570392cb3"} Feb 16 15:16:39 crc kubenswrapper[4748]: I0216 15:16:39.247769 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60df5608-339f-4262-8459-eb5359287bd8","Type":"ContainerStarted","Data":"f751f83b6ade9463c08e23ac55b840f793d360df12075c18d193d39b097ac3ea"} Feb 16 15:16:39 crc kubenswrapper[4748]: I0216 15:16:39.255491 4748 generic.go:334] "Generic (PLEG): container finished" podID="d5e00d08-e68a-479e-961b-38bc4e12b351" containerID="d53711d99957759a4e5a8ebb3530b2da6e6eeb4d48c822ace2e842d1fa401f2b" exitCode=0 Feb 16 15:16:39 crc kubenswrapper[4748]: I0216 15:16:39.255540 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kpzzb" 
event={"ID":"d5e00d08-e68a-479e-961b-38bc4e12b351","Type":"ContainerDied","Data":"d53711d99957759a4e5a8ebb3530b2da6e6eeb4d48c822ace2e842d1fa401f2b"} Feb 16 15:16:39 crc kubenswrapper[4748]: I0216 15:16:39.634382 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:16:39 crc kubenswrapper[4748]: I0216 15:16:39.634689 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:40.649033 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:40.649986 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:40.890241 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:40.997076 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgnbz\" (UniqueName: \"kubernetes.io/projected/d5e00d08-e68a-479e-961b-38bc4e12b351-kube-api-access-zgnbz\") pod \"d5e00d08-e68a-479e-961b-38bc4e12b351\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:40.997619 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-config-data\") pod \"d5e00d08-e68a-479e-961b-38bc4e12b351\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:40.997787 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-combined-ca-bundle\") pod \"d5e00d08-e68a-479e-961b-38bc4e12b351\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:40.997966 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-scripts\") pod \"d5e00d08-e68a-479e-961b-38bc4e12b351\" (UID: \"d5e00d08-e68a-479e-961b-38bc4e12b351\") " Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.004321 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-scripts" (OuterVolumeSpecName: "scripts") pod "d5e00d08-e68a-479e-961b-38bc4e12b351" (UID: "d5e00d08-e68a-479e-961b-38bc4e12b351"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.005892 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5e00d08-e68a-479e-961b-38bc4e12b351-kube-api-access-zgnbz" (OuterVolumeSpecName: "kube-api-access-zgnbz") pod "d5e00d08-e68a-479e-961b-38bc4e12b351" (UID: "d5e00d08-e68a-479e-961b-38bc4e12b351"). InnerVolumeSpecName "kube-api-access-zgnbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.034809 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-config-data" (OuterVolumeSpecName: "config-data") pod "d5e00d08-e68a-479e-961b-38bc4e12b351" (UID: "d5e00d08-e68a-479e-961b-38bc4e12b351"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.037269 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5e00d08-e68a-479e-961b-38bc4e12b351" (UID: "d5e00d08-e68a-479e-961b-38bc4e12b351"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.106774 4748 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-scripts\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.107106 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgnbz\" (UniqueName: \"kubernetes.io/projected/d5e00d08-e68a-479e-961b-38bc4e12b351-kube-api-access-zgnbz\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.107122 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-config-data\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.107133 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5e00d08-e68a-479e-961b-38bc4e12b351-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.275730 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-kpzzb" event={"ID":"d5e00d08-e68a-479e-961b-38bc4e12b351","Type":"ContainerDied","Data":"32de0af9bca4d5587d0079307f259960ee20ab1b8fadb0a5627084a4f78461ad"} Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.276088 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32de0af9bca4d5587d0079307f259960ee20ab1b8fadb0a5627084a4f78461ad" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.276115 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-kpzzb" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.278828 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"60df5608-339f-4262-8459-eb5359287bd8","Type":"ContainerStarted","Data":"387e01e39ed1f8938dc7b12deb117e7f240d2cdee4757afc26f3eed69a206609"} Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.279014 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.326910 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.147865378 podStartE2EDuration="5.326888468s" podCreationTimestamp="2026-02-16 15:16:36 +0000 UTC" firstStartedPulling="2026-02-16 15:16:37.196015807 +0000 UTC m=+1422.887684836" lastFinishedPulling="2026-02-16 15:16:40.375038887 +0000 UTC m=+1426.066707926" observedRunningTime="2026-02-16 15:16:41.307019369 +0000 UTC m=+1426.998688408" watchObservedRunningTime="2026-02-16 15:16:41.326888468 +0000 UTC m=+1427.018557507" Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.511161 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.511417 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6f0209e1-f503-4d61-8205-d0d56a2f754e" containerName="nova-scheduler-scheduler" containerID="cri-o://ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873" gracePeriod=30 Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.524401 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.524643 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-log" containerID="cri-o://b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41" gracePeriod=30 Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.525257 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-api" containerID="cri-o://8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e" gracePeriod=30 Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.551740 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.551988 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-log" containerID="cri-o://b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526" gracePeriod=30 Feb 16 15:16:41 crc kubenswrapper[4748]: I0216 15:16:41.552077 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-metadata" containerID="cri-o://abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de" gracePeriod=30 Feb 16 15:16:42 crc kubenswrapper[4748]: I0216 15:16:42.290161 4748 generic.go:334] "Generic (PLEG): container finished" podID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerID="b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41" exitCode=143 Feb 16 15:16:42 crc kubenswrapper[4748]: I0216 15:16:42.290231 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c7d6c0e-64e8-4bc9-8d79-64c126969605","Type":"ContainerDied","Data":"b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41"} Feb 16 15:16:42 crc kubenswrapper[4748]: I0216 15:16:42.292234 4748 
generic.go:334] "Generic (PLEG): container finished" podID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerID="b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526" exitCode=143 Feb 16 15:16:42 crc kubenswrapper[4748]: I0216 15:16:42.292320 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4793ab6a-d06f-4141-be39-25d49d9cd99d","Type":"ContainerDied","Data":"b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526"} Feb 16 15:16:42 crc kubenswrapper[4748]: E0216 15:16:42.842905 4748 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873 is running failed: container process not found" containerID="ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:16:42 crc kubenswrapper[4748]: E0216 15:16:42.843489 4748 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873 is running failed: container process not found" containerID="ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:16:42 crc kubenswrapper[4748]: E0216 15:16:42.843875 4748 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873 is running failed: container process not found" containerID="ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 16 15:16:42 crc kubenswrapper[4748]: E0216 15:16:42.843904 4748 prober.go:104] "Probe errored" err="rpc error: code = 
NotFound desc = container is not created or running: checking if PID of ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6f0209e1-f503-4d61-8205-d0d56a2f754e" containerName="nova-scheduler-scheduler" Feb 16 15:16:42 crc kubenswrapper[4748]: I0216 15:16:42.891795 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.061492 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-config-data\") pod \"6f0209e1-f503-4d61-8205-d0d56a2f754e\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.062369 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-combined-ca-bundle\") pod \"6f0209e1-f503-4d61-8205-d0d56a2f754e\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.062409 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qxh6\" (UniqueName: \"kubernetes.io/projected/6f0209e1-f503-4d61-8205-d0d56a2f754e-kube-api-access-7qxh6\") pod \"6f0209e1-f503-4d61-8205-d0d56a2f754e\" (UID: \"6f0209e1-f503-4d61-8205-d0d56a2f754e\") " Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.069836 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f0209e1-f503-4d61-8205-d0d56a2f754e-kube-api-access-7qxh6" (OuterVolumeSpecName: "kube-api-access-7qxh6") pod "6f0209e1-f503-4d61-8205-d0d56a2f754e" (UID: "6f0209e1-f503-4d61-8205-d0d56a2f754e"). InnerVolumeSpecName "kube-api-access-7qxh6". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.096765 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f0209e1-f503-4d61-8205-d0d56a2f754e" (UID: "6f0209e1-f503-4d61-8205-d0d56a2f754e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.101827 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-config-data" (OuterVolumeSpecName: "config-data") pod "6f0209e1-f503-4d61-8205-d0d56a2f754e" (UID: "6f0209e1-f503-4d61-8205-d0d56a2f754e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.165076 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qxh6\" (UniqueName: \"kubernetes.io/projected/6f0209e1-f503-4d61-8205-d0d56a2f754e-kube-api-access-7qxh6\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.165104 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.165114 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f0209e1-f503-4d61-8205-d0d56a2f754e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.303789 4748 generic.go:334] "Generic (PLEG): container finished" podID="6f0209e1-f503-4d61-8205-d0d56a2f754e" containerID="ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873" exitCode=0
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.303828 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f0209e1-f503-4d61-8205-d0d56a2f754e","Type":"ContainerDied","Data":"ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873"}
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.303839 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.303857 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6f0209e1-f503-4d61-8205-d0d56a2f754e","Type":"ContainerDied","Data":"a0a81e1960c6d08dffc47db40c58d266517020109152446f20884a9ce504ffc0"}
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.303875 4748 scope.go:117] "RemoveContainer" containerID="ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.328202 4748 scope.go:117] "RemoveContainer" containerID="ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873"
Feb 16 15:16:43 crc kubenswrapper[4748]: E0216 15:16:43.329986 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873\": container with ID starting with ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873 not found: ID does not exist" containerID="ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.330018 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873"} err="failed to get container status \"ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873\": rpc error: code = NotFound desc = could not find container \"ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873\": container with ID starting with ace9f8613c0b323328b86134474034a4dfc484dc4c72df7044e7dcadb150c873 not found: ID does not exist"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.337776 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.351842 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.362773 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:16:43 crc kubenswrapper[4748]: E0216 15:16:43.363193 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5e00d08-e68a-479e-961b-38bc4e12b351" containerName="nova-manage"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.363212 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5e00d08-e68a-479e-961b-38bc4e12b351" containerName="nova-manage"
Feb 16 15:16:43 crc kubenswrapper[4748]: E0216 15:16:43.363257 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f0209e1-f503-4d61-8205-d0d56a2f754e" containerName="nova-scheduler-scheduler"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.363264 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f0209e1-f503-4d61-8205-d0d56a2f754e" containerName="nova-scheduler-scheduler"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.363451 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5e00d08-e68a-479e-961b-38bc4e12b351" containerName="nova-manage"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.363469 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f0209e1-f503-4d61-8205-d0d56a2f754e" containerName="nova-scheduler-scheduler"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.364135 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.365880 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.374128 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.471194 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg2m9\" (UniqueName: \"kubernetes.io/projected/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-kube-api-access-sg2m9\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.471258 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.471317 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-config-data\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.572704 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.572825 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-config-data\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.572958 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg2m9\" (UniqueName: \"kubernetes.io/projected/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-kube-api-access-sg2m9\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.577156 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.586315 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-config-data\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.589957 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg2m9\" (UniqueName: \"kubernetes.io/projected/380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6-kube-api-access-sg2m9\") pod \"nova-scheduler-0\" (UID: \"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6\") " pod="openstack/nova-scheduler-0"
Feb 16 15:16:43 crc kubenswrapper[4748]: I0216 15:16:43.721341 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Feb 16 15:16:44 crc kubenswrapper[4748]: I0216 15:16:44.249861 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Feb 16 15:16:44 crc kubenswrapper[4748]: I0216 15:16:44.317328 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6","Type":"ContainerStarted","Data":"12cd05818124321fb6fa01a93bb8df549ddc75306343278d4b6efc46e37fb198"}
Feb 16 15:16:44 crc kubenswrapper[4748]: I0216 15:16:44.717397 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": read tcp 10.217.0.2:32844->10.217.0.215:8775: read: connection reset by peer"
Feb 16 15:16:44 crc kubenswrapper[4748]: I0216 15:16:44.717407 4748 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.215:8775/\": read tcp 10.217.0.2:32842->10.217.0.215:8775: read: connection reset by peer"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.014180 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f0209e1-f503-4d61-8205-d0d56a2f754e" path="/var/lib/kubelet/pods/6f0209e1-f503-4d61-8205-d0d56a2f754e/volumes"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.266622 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.319987 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-combined-ca-bundle\") pod \"4793ab6a-d06f-4141-be39-25d49d9cd99d\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") "
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.320075 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-config-data\") pod \"4793ab6a-d06f-4141-be39-25d49d9cd99d\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") "
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.320121 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-nova-metadata-tls-certs\") pod \"4793ab6a-d06f-4141-be39-25d49d9cd99d\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") "
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.320158 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4793ab6a-d06f-4141-be39-25d49d9cd99d-logs\") pod \"4793ab6a-d06f-4141-be39-25d49d9cd99d\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") "
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.320353 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnjgr\" (UniqueName: \"kubernetes.io/projected/4793ab6a-d06f-4141-be39-25d49d9cd99d-kube-api-access-xnjgr\") pod \"4793ab6a-d06f-4141-be39-25d49d9cd99d\" (UID: \"4793ab6a-d06f-4141-be39-25d49d9cd99d\") "
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.321086 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4793ab6a-d06f-4141-be39-25d49d9cd99d-logs" (OuterVolumeSpecName: "logs") pod "4793ab6a-d06f-4141-be39-25d49d9cd99d" (UID: "4793ab6a-d06f-4141-be39-25d49d9cd99d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.350058 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4793ab6a-d06f-4141-be39-25d49d9cd99d-kube-api-access-xnjgr" (OuterVolumeSpecName: "kube-api-access-xnjgr") pod "4793ab6a-d06f-4141-be39-25d49d9cd99d" (UID: "4793ab6a-d06f-4141-be39-25d49d9cd99d"). InnerVolumeSpecName "kube-api-access-xnjgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.363375 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6","Type":"ContainerStarted","Data":"8ac73bf7ae0938ecfadde240b28ee13a86a5cd4ac1c109e66f602f27d2fccd64"}
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.365182 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4793ab6a-d06f-4141-be39-25d49d9cd99d" (UID: "4793ab6a-d06f-4141-be39-25d49d9cd99d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.369676 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-config-data" (OuterVolumeSpecName: "config-data") pod "4793ab6a-d06f-4141-be39-25d49d9cd99d" (UID: "4793ab6a-d06f-4141-be39-25d49d9cd99d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.375344 4748 generic.go:334] "Generic (PLEG): container finished" podID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerID="abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de" exitCode=0
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.375394 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4793ab6a-d06f-4141-be39-25d49d9cd99d","Type":"ContainerDied","Data":"abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de"}
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.375420 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4793ab6a-d06f-4141-be39-25d49d9cd99d","Type":"ContainerDied","Data":"4f036ba3ae1845d89449f2e0359801b591aaa2c09be3a62339a9fd922ce19a49"}
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.375437 4748 scope.go:117] "RemoveContainer" containerID="abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.375556 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.388504 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.388487445 podStartE2EDuration="2.388487445s" podCreationTimestamp="2026-02-16 15:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:45.385189104 +0000 UTC m=+1431.076858163" watchObservedRunningTime="2026-02-16 15:16:45.388487445 +0000 UTC m=+1431.080156484"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.393881 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4793ab6a-d06f-4141-be39-25d49d9cd99d" (UID: "4793ab6a-d06f-4141-be39-25d49d9cd99d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.422855 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnjgr\" (UniqueName: \"kubernetes.io/projected/4793ab6a-d06f-4141-be39-25d49d9cd99d-kube-api-access-xnjgr\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.422887 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.422896 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.422905 4748 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4793ab6a-d06f-4141-be39-25d49d9cd99d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.422917 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4793ab6a-d06f-4141-be39-25d49d9cd99d-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.437881 4748 scope.go:117] "RemoveContainer" containerID="b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.479650 4748 scope.go:117] "RemoveContainer" containerID="abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de"
Feb 16 15:16:45 crc kubenswrapper[4748]: E0216 15:16:45.481059 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de\": container with ID starting with abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de not found: ID does not exist" containerID="abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.481102 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de"} err="failed to get container status \"abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de\": rpc error: code = NotFound desc = could not find container \"abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de\": container with ID starting with abcac32dec6eef96c26b7a1a5b45a0a0d9a7224573f787055b20e16ae91f48de not found: ID does not exist"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.481145 4748 scope.go:117] "RemoveContainer" containerID="b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526"
Feb 16 15:16:45 crc kubenswrapper[4748]: E0216 15:16:45.481582 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526\": container with ID starting with b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526 not found: ID does not exist" containerID="b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.481615 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526"} err="failed to get container status \"b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526\": rpc error: code = NotFound desc = could not find container \"b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526\": container with ID starting with b299358c4d1dcfdc269674799af5665289858b5d3bf1dcc97a26d3e004649526 not found: ID does not exist"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.706768 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.718890 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.736995 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:16:45 crc kubenswrapper[4748]: E0216 15:16:45.737551 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-log"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.737582 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-log"
Feb 16 15:16:45 crc kubenswrapper[4748]: E0216 15:16:45.737606 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-metadata"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.737614 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-metadata"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.737999 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-log"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.738019 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" containerName="nova-metadata-metadata"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.739249 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.744099 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.744375 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.749322 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.831606 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-config-data\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.832064 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6490c748-242c-465e-a5ba-c44b9276c005-logs\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.832104 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.832172 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z4fn\" (UniqueName: \"kubernetes.io/projected/6490c748-242c-465e-a5ba-c44b9276c005-kube-api-access-5z4fn\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.832199 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.933765 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6490c748-242c-465e-a5ba-c44b9276c005-logs\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.933821 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.933886 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z4fn\" (UniqueName: \"kubernetes.io/projected/6490c748-242c-465e-a5ba-c44b9276c005-kube-api-access-5z4fn\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.933912 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.933942 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-config-data\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.934259 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6490c748-242c-465e-a5ba-c44b9276c005-logs\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.938635 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.939875 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.940051 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6490c748-242c-465e-a5ba-c44b9276c005-config-data\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:45 crc kubenswrapper[4748]: I0216 15:16:45.963555 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z4fn\" (UniqueName: \"kubernetes.io/projected/6490c748-242c-465e-a5ba-c44b9276c005-kube-api-access-5z4fn\") pod \"nova-metadata-0\" (UID: \"6490c748-242c-465e-a5ba-c44b9276c005\") " pod="openstack/nova-metadata-0"
Feb 16 15:16:46 crc kubenswrapper[4748]: I0216 15:16:46.056387 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 16 15:16:46 crc kubenswrapper[4748]: I0216 15:16:46.538362 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.010502 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4793ab6a-d06f-4141-be39-25d49d9cd99d" path="/var/lib/kubelet/pods/4793ab6a-d06f-4141-be39-25d49d9cd99d/volumes"
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.284259 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.344770 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5l4k9" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" probeResult="failure" output=<
Feb 16 15:16:47 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s
Feb 16 15:16:47 crc kubenswrapper[4748]: >
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.367829 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-combined-ca-bundle\") pod \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") "
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.368561 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-public-tls-certs\") pod \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") "
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.368773 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7d6c0e-64e8-4bc9-8d79-64c126969605-logs\") pod \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") "
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.368800 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-internal-tls-certs\") pod \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") "
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.368860 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84sjf\" (UniqueName: \"kubernetes.io/projected/6c7d6c0e-64e8-4bc9-8d79-64c126969605-kube-api-access-84sjf\") pod \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") "
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.368932 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-config-data\") pod \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\" (UID: \"6c7d6c0e-64e8-4bc9-8d79-64c126969605\") "
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.370751 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7d6c0e-64e8-4bc9-8d79-64c126969605-logs" (OuterVolumeSpecName: "logs") pod "6c7d6c0e-64e8-4bc9-8d79-64c126969605" (UID: "6c7d6c0e-64e8-4bc9-8d79-64c126969605"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.374003 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7d6c0e-64e8-4bc9-8d79-64c126969605-kube-api-access-84sjf" (OuterVolumeSpecName: "kube-api-access-84sjf") pod "6c7d6c0e-64e8-4bc9-8d79-64c126969605" (UID: "6c7d6c0e-64e8-4bc9-8d79-64c126969605"). InnerVolumeSpecName "kube-api-access-84sjf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.413547 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c7d6c0e-64e8-4bc9-8d79-64c126969605" (UID: "6c7d6c0e-64e8-4bc9-8d79-64c126969605"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.414608 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-config-data" (OuterVolumeSpecName: "config-data") pod "6c7d6c0e-64e8-4bc9-8d79-64c126969605" (UID: "6c7d6c0e-64e8-4bc9-8d79-64c126969605"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.417150 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6490c748-242c-465e-a5ba-c44b9276c005","Type":"ContainerStarted","Data":"93c4967c2af4d6d6e51a4669732b495d2ce1d12c0fc48f068333a962df666ef7"}
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.417200 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6490c748-242c-465e-a5ba-c44b9276c005","Type":"ContainerStarted","Data":"5db2b19d364b5a7ac86824444a71dba381df1076982393e524f9eebe52246d41"}
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.417213 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6490c748-242c-465e-a5ba-c44b9276c005","Type":"ContainerStarted","Data":"71cc373684648b56ecce59f8807bc72448377c5e0d21cd9a76ffa9a8027b3867"}
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.419541 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6c7d6c0e-64e8-4bc9-8d79-64c126969605" (UID: "6c7d6c0e-64e8-4bc9-8d79-64c126969605"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.419677 4748 generic.go:334] "Generic (PLEG): container finished" podID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerID="8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e" exitCode=0
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.419853 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.419707 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c7d6c0e-64e8-4bc9-8d79-64c126969605","Type":"ContainerDied","Data":"8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e"}
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.420307 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"6c7d6c0e-64e8-4bc9-8d79-64c126969605","Type":"ContainerDied","Data":"84667fdd8b76bd3cf0e88e91496b51fd5d19d299d06d0ef9efbe6e3d041bc4e9"}
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.420319 4748 scope.go:117] "RemoveContainer" containerID="8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e"
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.444469 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.444451582 podStartE2EDuration="2.444451582s" podCreationTimestamp="2026-02-16 15:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:47.436793543 +0000 UTC m=+1433.128462592" watchObservedRunningTime="2026-02-16 15:16:47.444451582 +0000 UTC m=+1433.136120621"
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.460182 4748 scope.go:117] "RemoveContainer" containerID="b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41"
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.467829 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6c7d6c0e-64e8-4bc9-8d79-64c126969605" (UID: "6c7d6c0e-64e8-4bc9-8d79-64c126969605"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.471251 4748 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c7d6c0e-64e8-4bc9-8d79-64c126969605-logs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.471282 4748 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.471293 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84sjf\" (UniqueName: \"kubernetes.io/projected/6c7d6c0e-64e8-4bc9-8d79-64c126969605-kube-api-access-84sjf\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.471303 4748 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-config-data\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.471312 4748 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.471321 4748 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c7d6c0e-64e8-4bc9-8d79-64c126969605-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.480464 4748 scope.go:117] "RemoveContainer" containerID="8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e"
Feb 16 15:16:47 crc kubenswrapper[4748]: E0216 15:16:47.481023 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e\": container with ID starting with 8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e not found: ID does not exist" containerID="8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.481072 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e"} err="failed to get container status \"8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e\": rpc error: code = NotFound desc = could not find container \"8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e\": container with ID starting with 8faca69d9494b630062917af03da3ef7ddcf44e85ba1efbe6b431a2ef249467e not found: ID does not exist" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.481097 4748 scope.go:117] "RemoveContainer" containerID="b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41" Feb 16 15:16:47 crc kubenswrapper[4748]: E0216 15:16:47.481472 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41\": container with ID starting with b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41 not found: ID does not exist" containerID="b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.481506 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41"} err="failed to get container status \"b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41\": rpc error: code = NotFound desc = could not find container 
\"b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41\": container with ID starting with b0afe32dd852b7a086f4ae5f812214aae1c3d79daa6f32cc15f79d9b93564e41 not found: ID does not exist" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.753372 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.765909 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.776648 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:47 crc kubenswrapper[4748]: E0216 15:16:47.777080 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-log" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.777097 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-log" Feb 16 15:16:47 crc kubenswrapper[4748]: E0216 15:16:47.777118 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-api" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.777125 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-api" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.777319 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-api" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.777341 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" containerName="nova-api-log" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.778369 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.781128 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.781682 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.783939 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.793053 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.878862 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tskz8\" (UniqueName: \"kubernetes.io/projected/ee2a1e55-f629-46db-872b-db3f1baee84a-kube-api-access-tskz8\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.879011 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.879061 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.879157 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee2a1e55-f629-46db-872b-db3f1baee84a-logs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.879208 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-config-data\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.879229 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.981389 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee2a1e55-f629-46db-872b-db3f1baee84a-logs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.981470 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-config-data\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.981495 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc 
kubenswrapper[4748]: I0216 15:16:47.981579 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tskz8\" (UniqueName: \"kubernetes.io/projected/ee2a1e55-f629-46db-872b-db3f1baee84a-kube-api-access-tskz8\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.981680 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.981741 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.982026 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee2a1e55-f629-46db-872b-db3f1baee84a-logs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.985540 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-config-data\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.986168 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.989339 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-public-tls-certs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: I0216 15:16:47.993766 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ee2a1e55-f629-46db-872b-db3f1baee84a-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:47 crc kubenswrapper[4748]: E0216 15:16:47.996126 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:16:48 crc kubenswrapper[4748]: I0216 15:16:48.001159 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tskz8\" (UniqueName: \"kubernetes.io/projected/ee2a1e55-f629-46db-872b-db3f1baee84a-kube-api-access-tskz8\") pod \"nova-api-0\" (UID: \"ee2a1e55-f629-46db-872b-db3f1baee84a\") " pod="openstack/nova-api-0" Feb 16 15:16:48 crc kubenswrapper[4748]: I0216 15:16:48.096537 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 16 15:16:48 crc kubenswrapper[4748]: I0216 15:16:48.579630 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 16 15:16:48 crc kubenswrapper[4748]: W0216 15:16:48.587938 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee2a1e55_f629_46db_872b_db3f1baee84a.slice/crio-1f5f0f27e96c7ee63884dc2637fe9a3fc656a9d23cf9ae2e68cd0616398819d2 WatchSource:0}: Error finding container 1f5f0f27e96c7ee63884dc2637fe9a3fc656a9d23cf9ae2e68cd0616398819d2: Status 404 returned error can't find the container with id 1f5f0f27e96c7ee63884dc2637fe9a3fc656a9d23cf9ae2e68cd0616398819d2 Feb 16 15:16:48 crc kubenswrapper[4748]: I0216 15:16:48.722567 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 16 15:16:49 crc kubenswrapper[4748]: I0216 15:16:49.018275 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7d6c0e-64e8-4bc9-8d79-64c126969605" path="/var/lib/kubelet/pods/6c7d6c0e-64e8-4bc9-8d79-64c126969605/volumes" Feb 16 15:16:49 crc kubenswrapper[4748]: I0216 15:16:49.444184 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee2a1e55-f629-46db-872b-db3f1baee84a","Type":"ContainerStarted","Data":"7fdd180713ce7a90841cdf597c95039e44f5ae8cbf4da4242ef8d1b9cb4dcad9"} Feb 16 15:16:49 crc kubenswrapper[4748]: I0216 15:16:49.444244 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ee2a1e55-f629-46db-872b-db3f1baee84a","Type":"ContainerStarted","Data":"9edebf389d93372544e4ece710f0c2a46af58d81ad29f13f36d0e4d2b15d2b70"} Feb 16 15:16:49 crc kubenswrapper[4748]: I0216 15:16:49.444265 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ee2a1e55-f629-46db-872b-db3f1baee84a","Type":"ContainerStarted","Data":"1f5f0f27e96c7ee63884dc2637fe9a3fc656a9d23cf9ae2e68cd0616398819d2"} Feb 16 15:16:49 crc kubenswrapper[4748]: I0216 15:16:49.473098 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.473081427 podStartE2EDuration="2.473081427s" podCreationTimestamp="2026-02-16 15:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:16:49.470411602 +0000 UTC m=+1435.162080641" watchObservedRunningTime="2026-02-16 15:16:49.473081427 +0000 UTC m=+1435.164750456" Feb 16 15:16:51 crc kubenswrapper[4748]: I0216 15:16:51.056471 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:16:51 crc kubenswrapper[4748]: I0216 15:16:51.057098 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 16 15:16:53 crc kubenswrapper[4748]: I0216 15:16:53.722396 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 16 15:16:53 crc kubenswrapper[4748]: I0216 15:16:53.750569 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 16 15:16:54 crc kubenswrapper[4748]: I0216 15:16:54.551575 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 16 15:16:56 crc kubenswrapper[4748]: I0216 15:16:56.057130 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:16:56 crc kubenswrapper[4748]: I0216 15:16:56.057387 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 16 15:16:56 crc kubenswrapper[4748]: I0216 15:16:56.343999 4748 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:56 crc kubenswrapper[4748]: I0216 15:16:56.413278 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:56 crc kubenswrapper[4748]: I0216 15:16:56.586626 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5l4k9"] Feb 16 15:16:57 crc kubenswrapper[4748]: I0216 15:16:57.071927 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6490c748-242c-465e-a5ba-c44b9276c005" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:57 crc kubenswrapper[4748]: I0216 15:16:57.071938 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="6490c748-242c-465e-a5ba-c44b9276c005" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.229:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:57 crc kubenswrapper[4748]: I0216 15:16:57.523184 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5l4k9" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" containerID="cri-o://455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786" gracePeriod=2 Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.014445 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.097653 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.097725 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.122696 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-catalog-content\") pod \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.122858 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7n5gj\" (UniqueName: \"kubernetes.io/projected/052d4b89-506b-4911-9f30-30ffb4b0e7b3-kube-api-access-7n5gj\") pod \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.123001 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-utilities\") pod \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\" (UID: \"052d4b89-506b-4911-9f30-30ffb4b0e7b3\") " Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.131114 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-utilities" (OuterVolumeSpecName: "utilities") pod "052d4b89-506b-4911-9f30-30ffb4b0e7b3" (UID: "052d4b89-506b-4911-9f30-30ffb4b0e7b3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.138506 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.144898 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/052d4b89-506b-4911-9f30-30ffb4b0e7b3-kube-api-access-7n5gj" (OuterVolumeSpecName: "kube-api-access-7n5gj") pod "052d4b89-506b-4911-9f30-30ffb4b0e7b3" (UID: "052d4b89-506b-4911-9f30-30ffb4b0e7b3"). InnerVolumeSpecName "kube-api-access-7n5gj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.240486 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7n5gj\" (UniqueName: \"kubernetes.io/projected/052d4b89-506b-4911-9f30-30ffb4b0e7b3-kube-api-access-7n5gj\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.278208 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "052d4b89-506b-4911-9f30-30ffb4b0e7b3" (UID: "052d4b89-506b-4911-9f30-30ffb4b0e7b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.341991 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/052d4b89-506b-4911-9f30-30ffb4b0e7b3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.541284 4748 generic.go:334] "Generic (PLEG): container finished" podID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerID="455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786" exitCode=0 Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.541376 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l4k9" event={"ID":"052d4b89-506b-4911-9f30-30ffb4b0e7b3","Type":"ContainerDied","Data":"455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786"} Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.541474 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5l4k9" event={"ID":"052d4b89-506b-4911-9f30-30ffb4b0e7b3","Type":"ContainerDied","Data":"49090529106ed0e3331424914f1790945f2fed75da4587141a6fdfb9fb146bed"} Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.541510 4748 scope.go:117] "RemoveContainer" containerID="455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.541513 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5l4k9" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.586978 4748 scope.go:117] "RemoveContainer" containerID="6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.606808 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5l4k9"] Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.626049 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5l4k9"] Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.635955 4748 scope.go:117] "RemoveContainer" containerID="9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.675486 4748 scope.go:117] "RemoveContainer" containerID="455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786" Feb 16 15:16:58 crc kubenswrapper[4748]: E0216 15:16:58.676132 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786\": container with ID starting with 455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786 not found: ID does not exist" containerID="455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.676200 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786"} err="failed to get container status \"455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786\": rpc error: code = NotFound desc = could not find container \"455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786\": container with ID starting with 455841297af1a4da0df1a7dd28f589a50f3c9c7fa48e3ab29439989f3bda8786 not found: ID does 
not exist" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.676241 4748 scope.go:117] "RemoveContainer" containerID="6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c" Feb 16 15:16:58 crc kubenswrapper[4748]: E0216 15:16:58.677579 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c\": container with ID starting with 6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c not found: ID does not exist" containerID="6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.677631 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c"} err="failed to get container status \"6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c\": rpc error: code = NotFound desc = could not find container \"6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c\": container with ID starting with 6c3e4a11a3c5f665bb406f50aa172a1d996c76d6eedda00e41ae23c3fdb42e7c not found: ID does not exist" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.677664 4748 scope.go:117] "RemoveContainer" containerID="9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647" Feb 16 15:16:58 crc kubenswrapper[4748]: E0216 15:16:58.679057 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647\": container with ID starting with 9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647 not found: ID does not exist" containerID="9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647" Feb 16 15:16:58 crc kubenswrapper[4748]: I0216 15:16:58.679080 4748 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647"} err="failed to get container status \"9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647\": rpc error: code = NotFound desc = could not find container \"9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647\": container with ID starting with 9701b6291ef72fe351f7c7fdc0c9787db84b11faa0f231949bf98da609489647 not found: ID does not exist" Feb 16 15:16:59 crc kubenswrapper[4748]: I0216 15:16:59.006354 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" path="/var/lib/kubelet/pods/052d4b89-506b-4911-9f30-30ffb4b0e7b3/volumes" Feb 16 15:16:59 crc kubenswrapper[4748]: I0216 15:16:59.131202 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee2a1e55-f629-46db-872b-db3f1baee84a" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:16:59 crc kubenswrapper[4748]: I0216 15:16:59.131206 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ee2a1e55-f629-46db-872b-db3f1baee84a" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.230:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 16 15:17:01 crc kubenswrapper[4748]: E0216 15:17:01.996343 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:17:04 crc kubenswrapper[4748]: I0216 15:17:04.729345 4748 patch_prober.go:28] interesting 
pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:17:04 crc kubenswrapper[4748]: I0216 15:17:04.729913 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:17:04 crc kubenswrapper[4748]: I0216 15:17:04.730047 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg"
Feb 16 15:17:04 crc kubenswrapper[4748]: I0216 15:17:04.731352 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f07fa92f05df6cf447a32d8da407b2cd8f537fec4af797ea35626d400c475838"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:17:04 crc kubenswrapper[4748]: I0216 15:17:04.731467 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://f07fa92f05df6cf447a32d8da407b2cd8f537fec4af797ea35626d400c475838" gracePeriod=600
Feb 16 15:17:05 crc kubenswrapper[4748]: I0216 15:17:05.618687 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="f07fa92f05df6cf447a32d8da407b2cd8f537fec4af797ea35626d400c475838" exitCode=0
Feb 16 15:17:05 crc kubenswrapper[4748]: I0216 15:17:05.618819 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"f07fa92f05df6cf447a32d8da407b2cd8f537fec4af797ea35626d400c475838"}
Feb 16 15:17:05 crc kubenswrapper[4748]: I0216 15:17:05.619299 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"}
Feb 16 15:17:05 crc kubenswrapper[4748]: I0216 15:17:05.619333 4748 scope.go:117] "RemoveContainer" containerID="4af7b39f84ae1089f7c8b9340185a28b394a9429bf33b6cefed9e396e13808b9"
Feb 16 15:17:06 crc kubenswrapper[4748]: I0216 15:17:06.061985 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 15:17:06 crc kubenswrapper[4748]: I0216 15:17:06.064094 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Feb 16 15:17:06 crc kubenswrapper[4748]: I0216 15:17:06.066649 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 15:17:06 crc kubenswrapper[4748]: I0216 15:17:06.643924 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Feb 16 15:17:06 crc kubenswrapper[4748]: I0216 15:17:06.727143 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 16 15:17:08 crc kubenswrapper[4748]: I0216 15:17:08.108612 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:17:08 crc kubenswrapper[4748]: I0216 15:17:08.109696 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:17:08 crc kubenswrapper[4748]: I0216 15:17:08.110203 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Feb 16 15:17:08 crc kubenswrapper[4748]: I0216 15:17:08.121640 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:17:08 crc kubenswrapper[4748]: I0216 15:17:08.661680 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Feb 16 15:17:08 crc kubenswrapper[4748]: I0216 15:17:08.667767 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Feb 16 15:17:16 crc kubenswrapper[4748]: E0216 15:17:16.996774 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:17:30 crc kubenswrapper[4748]: E0216 15:17:30.997111 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:17:45 crc kubenswrapper[4748]: E0216 15:17:45.005117 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:18:00 crc kubenswrapper[4748]: E0216 15:17:59.999785 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:18:13 crc kubenswrapper[4748]: E0216 15:18:13.997775 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:18:25 crc kubenswrapper[4748]: E0216 15:18:25.003065 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:18:37 crc kubenswrapper[4748]: E0216 15:18:37.996338 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:18:49 crc kubenswrapper[4748]: E0216 15:18:49.996682 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:19:01 crc kubenswrapper[4748]: E0216 15:19:00.999748 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:19:12 crc kubenswrapper[4748]: E0216 15:19:12.129019 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 15:19:12 crc kubenswrapper[4748]: E0216 15:19:12.129704 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 15:19:12 crc kubenswrapper[4748]: E0216 15:19:12.129972 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError"
Feb 16 15:19:12 crc kubenswrapper[4748]: E0216 15:19:12.131299 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:19:18 crc kubenswrapper[4748]: I0216 15:19:18.572414 4748 scope.go:117] "RemoveContainer" containerID="6a28fc7a1fd9d13104051767db0a555f0c012e2f00e0a7f482c728bd59fb30af"
Feb 16 15:19:18 crc kubenswrapper[4748]: I0216 15:19:18.607903 4748 scope.go:117] "RemoveContainer" containerID="ee1a1deca949716e29ed0e0eec7f31924e160960019e8075e6883ef2166dbf28"
Feb 16 15:19:25 crc kubenswrapper[4748]: E0216 15:19:25.014752 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:19:34 crc kubenswrapper[4748]: I0216 15:19:34.729376 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:19:34 crc kubenswrapper[4748]: I0216 15:19:34.732243 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:19:38 crc kubenswrapper[4748]: E0216 15:19:38.000436 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:19:51 crc kubenswrapper[4748]: E0216 15:19:51.998332 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:20:03 crc kubenswrapper[4748]: E0216 15:20:03.998023 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:20:04 crc kubenswrapper[4748]: I0216 15:20:04.729448 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:20:04 crc kubenswrapper[4748]: I0216 15:20:04.729517 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:20:15 crc kubenswrapper[4748]: E0216 15:20:15.998630 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:20:18 crc kubenswrapper[4748]: I0216 15:20:18.752104 4748 scope.go:117] "RemoveContainer" containerID="d68ad46b87e23d3c1eff5c4239a24666e3ca74d9b7f5136f744160546adb6788"
Feb 16 15:20:29 crc kubenswrapper[4748]: E0216 15:20:29.996865 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:20:34 crc kubenswrapper[4748]: I0216 15:20:34.729782 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:20:34 crc kubenswrapper[4748]: I0216 15:20:34.730379 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:20:34 crc kubenswrapper[4748]: I0216 15:20:34.730446 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg"
Feb 16 15:20:34 crc kubenswrapper[4748]: I0216 15:20:34.731593 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:20:34 crc kubenswrapper[4748]: I0216 15:20:34.731694 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" gracePeriod=600
Feb 16 15:20:34 crc kubenswrapper[4748]: E0216 15:20:34.860023 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:20:35 crc kubenswrapper[4748]: I0216 15:20:35.312232 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" exitCode=0
Feb 16 15:20:35 crc kubenswrapper[4748]: I0216 15:20:35.312298 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"}
Feb 16 15:20:35 crc kubenswrapper[4748]: I0216 15:20:35.312359 4748 scope.go:117] "RemoveContainer" containerID="f07fa92f05df6cf447a32d8da407b2cd8f537fec4af797ea35626d400c475838"
Feb 16 15:20:35 crc kubenswrapper[4748]: I0216 15:20:35.313202 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:20:35 crc kubenswrapper[4748]: E0216 15:20:35.313753 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:20:42 crc kubenswrapper[4748]: E0216 15:20:42.998001 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:20:46 crc kubenswrapper[4748]: I0216 15:20:46.994407 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:20:46 crc kubenswrapper[4748]: E0216 15:20:46.995051 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:20:57 crc kubenswrapper[4748]: E0216 15:20:57.996334 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:20:59 crc kubenswrapper[4748]: I0216 15:20:59.995103 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:20:59 crc kubenswrapper[4748]: E0216 15:20:59.996196 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:21:10 crc kubenswrapper[4748]: E0216 15:21:10.997894 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:21:15 crc kubenswrapper[4748]: I0216 15:21:15.008997 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:21:15 crc kubenswrapper[4748]: E0216 15:21:15.014001 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:21:18 crc kubenswrapper[4748]: I0216 15:21:18.877302 4748 scope.go:117] "RemoveContainer" containerID="a3abffd4599c49844f84e2b5f01b2024c7775add7e218dae603c91ca42c667d8"
Feb 16 15:21:18 crc kubenswrapper[4748]: I0216 15:21:18.910923 4748 scope.go:117] "RemoveContainer" containerID="e503f427cf2edd61b5dd8f9459330d5e971c281b3e2fe00c292b9136fbcf8fa6"
Feb 16 15:21:23 crc kubenswrapper[4748]: E0216 15:21:23.996935 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:21:27 crc kubenswrapper[4748]: I0216 15:21:27.995307 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:21:27 crc kubenswrapper[4748]: E0216 15:21:27.996418 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:21:36 crc kubenswrapper[4748]: E0216 15:21:36.997955 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:21:43 crc kubenswrapper[4748]: I0216 15:21:43.000999 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:21:43 crc kubenswrapper[4748]: E0216 15:21:43.002445 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:21:47 crc kubenswrapper[4748]: E0216 15:21:47.997493 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:21:55 crc kubenswrapper[4748]: I0216 15:21:55.003755 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:21:55 crc kubenswrapper[4748]: E0216 15:21:55.005749 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:21:58 crc kubenswrapper[4748]: E0216 15:21:58.996552 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:22:07 crc kubenswrapper[4748]: I0216 15:22:07.994826 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:22:07 crc kubenswrapper[4748]: E0216 15:22:07.995862 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:22:13 crc kubenswrapper[4748]: E0216 15:22:13.997733 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:22:18 crc kubenswrapper[4748]: I0216 15:22:18.986368 4748 scope.go:117] "RemoveContainer" containerID="391797343a30e5f1af7c3cc18880f7104bcdcdfdcd5946f41a0e1639f9b87f02"
Feb 16 15:22:19 crc kubenswrapper[4748]: I0216 15:22:19.014308 4748 scope.go:117] "RemoveContainer" containerID="e67ff66d8217ca39b545f0d490aa22c3c9be52e303db8c81b1732a962a5dded9"
Feb 16 15:22:19 crc kubenswrapper[4748]: I0216 15:22:19.054017 4748 scope.go:117] "RemoveContainer" containerID="2868a699845efa013f08c2bbced70e222c9c4fd131c940b9b2ba9c9f0803e3e6"
Feb 16 15:22:19 crc kubenswrapper[4748]: I0216 15:22:19.080824 4748 scope.go:117] "RemoveContainer" containerID="0b54e2c3f99e71510b5e29ec3d639cf43bb11d9d2cd6b803ed760fcf3601ed8f"
Feb 16 15:22:20 crc kubenswrapper[4748]: I0216 15:22:20.994859 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:22:20 crc kubenswrapper[4748]: E0216 15:22:20.995708 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:22:28 crc kubenswrapper[4748]: E0216 15:22:28.000055 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:22:35 crc kubenswrapper[4748]: I0216 15:22:35.011172 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:22:35 crc kubenswrapper[4748]: E0216 15:22:35.012562 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:22:40 crc kubenswrapper[4748]: E0216 15:22:40.001145 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:22:45 crc kubenswrapper[4748]: I0216 15:22:45.059553 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7024-account-create-update-k45xs"]
Feb 16 15:22:45 crc kubenswrapper[4748]: I0216 15:22:45.070876 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-kcdl5"]
Feb 16 15:22:45 crc kubenswrapper[4748]: I0216 15:22:45.079167 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7024-account-create-update-k45xs"]
Feb 16 15:22:45 crc kubenswrapper[4748]: I0216 15:22:45.103205 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-kcdl5"]
Feb 16 15:22:46 crc kubenswrapper[4748]: I0216 15:22:46.995214 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:22:46 crc kubenswrapper[4748]: E0216 15:22:46.995555 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:22:47 crc kubenswrapper[4748]: I0216 15:22:47.008351 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fa94068-22b9-4a72-9ad0-66b48c0487bf" path="/var/lib/kubelet/pods/3fa94068-22b9-4a72-9ad0-66b48c0487bf/volumes"
Feb 16 15:22:47 crc kubenswrapper[4748]: I0216 15:22:47.010109 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f893b0c9-16a3-438e-9b07-6043dece0637" path="/var/lib/kubelet/pods/f893b0c9-16a3-438e-9b07-6043dece0637/volumes"
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.056582 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-8e95-account-create-update-zcbxs"]
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.075361 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-thj9q"]
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.087486 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gt8g9"]
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.098395 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ec90-account-create-update-pqx92"]
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.106580 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-8e95-account-create-update-zcbxs"]
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.115349 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ec90-account-create-update-pqx92"]
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.123326 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gt8g9"]
Feb 16 15:22:49 crc kubenswrapper[4748]: I0216 15:22:49.131118 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-thj9q"]
Feb 16 15:22:51 crc kubenswrapper[4748]: I0216 15:22:51.009155 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="364079ae-4285-4259-8cc2-59fe99051ee9" path="/var/lib/kubelet/pods/364079ae-4285-4259-8cc2-59fe99051ee9/volumes"
Feb 16 15:22:51 crc kubenswrapper[4748]: I0216 15:22:51.010232 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46539f2e-bcce-4a2e-b62d-fea1cf34f2eb" path="/var/lib/kubelet/pods/46539f2e-bcce-4a2e-b62d-fea1cf34f2eb/volumes"
Feb 16 15:22:51 crc kubenswrapper[4748]: I0216 15:22:51.010806 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fa9e51e-99aa-4a87-be9a-6804a4bb3259" path="/var/lib/kubelet/pods/4fa9e51e-99aa-4a87-be9a-6804a4bb3259/volumes"
Feb 16 15:22:51 crc kubenswrapper[4748]: I0216 15:22:51.011338 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cae5ce95-90b8-45f3-90a6-08b958802299" path="/var/lib/kubelet/pods/cae5ce95-90b8-45f3-90a6-08b958802299/volumes"
Feb 16 15:22:53 crc kubenswrapper[4748]: E0216 15:22:53.998233 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:22:58 crc kubenswrapper[4748]: I0216 15:22:58.994598 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:22:58 crc kubenswrapper[4748]: E0216 15:22:58.995612 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6"
Feb 16 15:23:01 crc kubenswrapper[4748]: I0216 15:23:01.062994 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qxd97"]
Feb 16 15:23:01 crc kubenswrapper[4748]: I0216 15:23:01.080536 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qxd97"]
Feb 16 15:23:03 crc kubenswrapper[4748]: I0216 15:23:03.021802 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef384d40-8b76-4b89-ada6-33dc0ff2e6fe" path="/var/lib/kubelet/pods/ef384d40-8b76-4b89-ada6-33dc0ff2e6fe/volumes"
Feb 16 15:23:08 crc kubenswrapper[4748]: E0216 15:23:08.997266 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:23:10 crc
kubenswrapper[4748]: I0216 15:23:10.993863 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:23:10 crc kubenswrapper[4748]: E0216 15:23:10.994452 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:23:19 crc kubenswrapper[4748]: I0216 15:23:19.180442 4748 scope.go:117] "RemoveContainer" containerID="60f9dc1b2e6eaa913acad2de5928e160bc0a842650f2085796d9ae296f3446d4" Feb 16 15:23:19 crc kubenswrapper[4748]: I0216 15:23:19.202708 4748 scope.go:117] "RemoveContainer" containerID="a37ff9ea47a10cd8c14c0c8fcee61b78f42d2668a283d6d23dce4a36b7705c44" Feb 16 15:23:19 crc kubenswrapper[4748]: I0216 15:23:19.262503 4748 scope.go:117] "RemoveContainer" containerID="12407b8b22c7c3dfc26eec8b5b583ac2768d42c5fe19d45fd1021ec0336e4ae5" Feb 16 15:23:19 crc kubenswrapper[4748]: I0216 15:23:19.337490 4748 scope.go:117] "RemoveContainer" containerID="028e8ce4ce984627df86a3f3d2187df9ae87bcbf58657e6f3f5978e38039cbda" Feb 16 15:23:19 crc kubenswrapper[4748]: I0216 15:23:19.382492 4748 scope.go:117] "RemoveContainer" containerID="934d29656fe2d31b40568f6621aee3dcbd434c869781df06f04eef2850d213d4" Feb 16 15:23:19 crc kubenswrapper[4748]: I0216 15:23:19.432116 4748 scope.go:117] "RemoveContainer" containerID="d219dbce367c55a26022e9a9e79570e1c4799d70af9d9c503989543cc41ca993" Feb 16 15:23:19 crc kubenswrapper[4748]: I0216 15:23:19.497027 4748 scope.go:117] "RemoveContainer" containerID="1287f70c00a10ca1f2e888e95057f7787da63b1159c9529783addd1ba94f678a" Feb 16 15:23:21 crc kubenswrapper[4748]: E0216 15:23:21.998822 4748 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.088429 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-t2thf"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.117093 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-22bvq"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.126225 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-59h2b"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.135871 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-34d8-account-create-update-gcgxs"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.149017 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-22bvq"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.157816 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-59h2b"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.165763 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-t2thf"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.179762 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-34d8-account-create-update-gcgxs"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.188830 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-6b27-account-create-update-nvlxz"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.205201 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-6b27-account-create-update-nvlxz"] Feb 16 
15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.215204 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xjt7m"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.224270 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-1694-account-create-update-vsgkb"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.231452 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-ebaf-account-create-update-vxgmw"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.238586 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-ebaf-account-create-update-vxgmw"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.246071 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xjt7m"] Feb 16 15:23:23 crc kubenswrapper[4748]: I0216 15:23:23.254261 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-1694-account-create-update-vsgkb"] Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.017899 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16626919-d38e-4cb5-8661-1b8e78e3967f" path="/var/lib/kubelet/pods/16626919-d38e-4cb5-8661-1b8e78e3967f/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.019330 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d1263a5-3517-4ba9-ad62-f35c7a6220c6" path="/var/lib/kubelet/pods/1d1263a5-3517-4ba9-ad62-f35c7a6220c6/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.020401 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24a0bc3e-b389-4f0a-8a31-40869b3a3e77" path="/var/lib/kubelet/pods/24a0bc3e-b389-4f0a-8a31-40869b3a3e77/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.021570 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3" 
path="/var/lib/kubelet/pods/8dbd2647-3bde-4d8c-ae80-0b7b8a99b2d3/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.023535 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1df7ac4-4495-45a5-bf86-2dda09e6f9b1" path="/var/lib/kubelet/pods/b1df7ac4-4495-45a5-bf86-2dda09e6f9b1/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.024625 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a" path="/var/lib/kubelet/pods/dbb267be-bdd9-4369-bc4f-b31ffd6ecc0a/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.025797 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc357533-40d6-4300-982f-dcfc7f6219db" path="/var/lib/kubelet/pods/dc357533-40d6-4300-982f-dcfc7f6219db/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.027755 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e25a5cff-5c50-4edf-a118-08d53e0f1cdf" path="/var/lib/kubelet/pods/e25a5cff-5c50-4edf-a118-08d53e0f1cdf/volumes" Feb 16 15:23:25 crc kubenswrapper[4748]: I0216 15:23:25.994737 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:23:25 crc kubenswrapper[4748]: E0216 15:23:25.995097 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:23:27 crc kubenswrapper[4748]: I0216 15:23:27.035785 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-b2zpx"] Feb 16 15:23:27 crc kubenswrapper[4748]: I0216 15:23:27.049089 4748 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openstack/glance-db-sync-b2zpx"] Feb 16 15:23:28 crc kubenswrapper[4748]: I0216 15:23:28.040460 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-dz5pf"] Feb 16 15:23:28 crc kubenswrapper[4748]: I0216 15:23:28.055915 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-dz5pf"] Feb 16 15:23:29 crc kubenswrapper[4748]: I0216 15:23:29.011681 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87ed93e3-0fa2-48ec-a2e8-2e371daaa93e" path="/var/lib/kubelet/pods/87ed93e3-0fa2-48ec-a2e8-2e371daaa93e/volumes" Feb 16 15:23:29 crc kubenswrapper[4748]: I0216 15:23:29.013079 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe" path="/var/lib/kubelet/pods/a141b5bf-1ed2-4a32-b7e3-7a6f3bf849fe/volumes" Feb 16 15:23:33 crc kubenswrapper[4748]: E0216 15:23:33.000532 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:23:38 crc kubenswrapper[4748]: I0216 15:23:38.995153 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:23:38 crc kubenswrapper[4748]: E0216 15:23:38.996160 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:23:47 crc kubenswrapper[4748]: E0216 
15:23:47.996914 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:23:50 crc kubenswrapper[4748]: I0216 15:23:50.995270 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:23:50 crc kubenswrapper[4748]: E0216 15:23:50.996320 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.316035 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s8g2h"] Feb 16 15:23:53 crc kubenswrapper[4748]: E0216 15:23:53.317048 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.317073 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" Feb 16 15:23:53 crc kubenswrapper[4748]: E0216 15:23:53.317094 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="extract-utilities" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.317105 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="extract-utilities" Feb 
16 15:23:53 crc kubenswrapper[4748]: E0216 15:23:53.317137 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="extract-content" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.317148 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="extract-content" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.317477 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="052d4b89-506b-4911-9f30-30ffb4b0e7b3" containerName="registry-server" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.321101 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.338378 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8g2h"] Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.436055 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-utilities\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.436368 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-catalog-content\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.436428 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stbzq\" (UniqueName: 
\"kubernetes.io/projected/8824dd47-af05-4b9b-85b0-668910ac051b-kube-api-access-stbzq\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.538679 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-catalog-content\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.538807 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stbzq\" (UniqueName: \"kubernetes.io/projected/8824dd47-af05-4b9b-85b0-668910ac051b-kube-api-access-stbzq\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.538953 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-utilities\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.539237 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-catalog-content\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.539313 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-utilities\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.572828 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stbzq\" (UniqueName: \"kubernetes.io/projected/8824dd47-af05-4b9b-85b0-668910ac051b-kube-api-access-stbzq\") pod \"redhat-marketplace-s8g2h\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:53 crc kubenswrapper[4748]: I0216 15:23:53.650953 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:23:54 crc kubenswrapper[4748]: I0216 15:23:54.149272 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8g2h"] Feb 16 15:23:54 crc kubenswrapper[4748]: I0216 15:23:54.756453 4748 generic.go:334] "Generic (PLEG): container finished" podID="8824dd47-af05-4b9b-85b0-668910ac051b" containerID="69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6" exitCode=0 Feb 16 15:23:54 crc kubenswrapper[4748]: I0216 15:23:54.756559 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8g2h" event={"ID":"8824dd47-af05-4b9b-85b0-668910ac051b","Type":"ContainerDied","Data":"69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6"} Feb 16 15:23:54 crc kubenswrapper[4748]: I0216 15:23:54.757181 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8g2h" event={"ID":"8824dd47-af05-4b9b-85b0-668910ac051b","Type":"ContainerStarted","Data":"307b1f0dd5adf65d6a57ecc297477c6eff08210b0325d68a40b5088056300688"} Feb 16 15:23:54 crc kubenswrapper[4748]: I0216 15:23:54.760344 4748 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 16 15:23:55 crc kubenswrapper[4748]: I0216 15:23:55.769677 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8g2h" event={"ID":"8824dd47-af05-4b9b-85b0-668910ac051b","Type":"ContainerStarted","Data":"6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5"} Feb 16 15:23:56 crc kubenswrapper[4748]: I0216 15:23:56.796304 4748 generic.go:334] "Generic (PLEG): container finished" podID="8824dd47-af05-4b9b-85b0-668910ac051b" containerID="6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5" exitCode=0 Feb 16 15:23:56 crc kubenswrapper[4748]: I0216 15:23:56.796392 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8g2h" event={"ID":"8824dd47-af05-4b9b-85b0-668910ac051b","Type":"ContainerDied","Data":"6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5"} Feb 16 15:23:57 crc kubenswrapper[4748]: I0216 15:23:57.047171 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jj48w"] Feb 16 15:23:57 crc kubenswrapper[4748]: I0216 15:23:57.058008 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jj48w"] Feb 16 15:23:57 crc kubenswrapper[4748]: I0216 15:23:57.809305 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8g2h" event={"ID":"8824dd47-af05-4b9b-85b0-668910ac051b","Type":"ContainerStarted","Data":"c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e"} Feb 16 15:23:57 crc kubenswrapper[4748]: I0216 15:23:57.835278 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s8g2h" podStartSLOduration=2.286515878 podStartE2EDuration="4.835249365s" podCreationTimestamp="2026-02-16 15:23:53 +0000 UTC" firstStartedPulling="2026-02-16 15:23:54.759818914 +0000 UTC m=+1860.451487993" 
lastFinishedPulling="2026-02-16 15:23:57.308552431 +0000 UTC m=+1863.000221480" observedRunningTime="2026-02-16 15:23:57.830034256 +0000 UTC m=+1863.521703335" watchObservedRunningTime="2026-02-16 15:23:57.835249365 +0000 UTC m=+1863.526918414" Feb 16 15:23:59 crc kubenswrapper[4748]: I0216 15:23:59.009866 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d248cf3a-4788-4b1b-9c6c-9fea87ed20cf" path="/var/lib/kubelet/pods/d248cf3a-4788-4b1b-9c6c-9fea87ed20cf/volumes" Feb 16 15:23:59 crc kubenswrapper[4748]: E0216 15:23:59.997656 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:24:03 crc kubenswrapper[4748]: I0216 15:24:03.069911 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5jqr7"] Feb 16 15:24:03 crc kubenswrapper[4748]: I0216 15:24:03.089266 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5jqr7"] Feb 16 15:24:03 crc kubenswrapper[4748]: I0216 15:24:03.653041 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:24:03 crc kubenswrapper[4748]: I0216 15:24:03.653425 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:24:03 crc kubenswrapper[4748]: I0216 15:24:03.748077 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:24:03 crc kubenswrapper[4748]: I0216 15:24:03.972105 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:24:03 crc 
kubenswrapper[4748]: I0216 15:24:03.994463 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:24:03 crc kubenswrapper[4748]: E0216 15:24:03.994704 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:24:04 crc kubenswrapper[4748]: I0216 15:24:04.042540 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8g2h"] Feb 16 15:24:05 crc kubenswrapper[4748]: I0216 15:24:05.023421 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b3dfe98-47ed-4b69-b461-f0f9185e4697" path="/var/lib/kubelet/pods/1b3dfe98-47ed-4b69-b461-f0f9185e4697/volumes" Feb 16 15:24:05 crc kubenswrapper[4748]: I0216 15:24:05.916796 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s8g2h" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="registry-server" containerID="cri-o://c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e" gracePeriod=2 Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.529366 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.655089 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stbzq\" (UniqueName: \"kubernetes.io/projected/8824dd47-af05-4b9b-85b0-668910ac051b-kube-api-access-stbzq\") pod \"8824dd47-af05-4b9b-85b0-668910ac051b\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.655191 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-catalog-content\") pod \"8824dd47-af05-4b9b-85b0-668910ac051b\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.655439 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-utilities\") pod \"8824dd47-af05-4b9b-85b0-668910ac051b\" (UID: \"8824dd47-af05-4b9b-85b0-668910ac051b\") " Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.656338 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-utilities" (OuterVolumeSpecName: "utilities") pod "8824dd47-af05-4b9b-85b0-668910ac051b" (UID: "8824dd47-af05-4b9b-85b0-668910ac051b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.668041 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8824dd47-af05-4b9b-85b0-668910ac051b-kube-api-access-stbzq" (OuterVolumeSpecName: "kube-api-access-stbzq") pod "8824dd47-af05-4b9b-85b0-668910ac051b" (UID: "8824dd47-af05-4b9b-85b0-668910ac051b"). InnerVolumeSpecName "kube-api-access-stbzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.686521 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8824dd47-af05-4b9b-85b0-668910ac051b" (UID: "8824dd47-af05-4b9b-85b0-668910ac051b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.758286 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stbzq\" (UniqueName: \"kubernetes.io/projected/8824dd47-af05-4b9b-85b0-668910ac051b-kube-api-access-stbzq\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.758630 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.758798 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8824dd47-af05-4b9b-85b0-668910ac051b-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.931996 4748 generic.go:334] "Generic (PLEG): container finished" podID="8824dd47-af05-4b9b-85b0-668910ac051b" containerID="c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e" exitCode=0 Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.932089 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s8g2h" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.932092 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8g2h" event={"ID":"8824dd47-af05-4b9b-85b0-668910ac051b","Type":"ContainerDied","Data":"c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e"} Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.932679 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s8g2h" event={"ID":"8824dd47-af05-4b9b-85b0-668910ac051b","Type":"ContainerDied","Data":"307b1f0dd5adf65d6a57ecc297477c6eff08210b0325d68a40b5088056300688"} Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.932728 4748 scope.go:117] "RemoveContainer" containerID="c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.973226 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8g2h"] Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.977182 4748 scope.go:117] "RemoveContainer" containerID="6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5" Feb 16 15:24:06 crc kubenswrapper[4748]: I0216 15:24:06.983507 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s8g2h"] Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.009814 4748 scope.go:117] "RemoveContainer" containerID="69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6" Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.013425 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" path="/var/lib/kubelet/pods/8824dd47-af05-4b9b-85b0-668910ac051b/volumes" Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.051943 4748 scope.go:117] "RemoveContainer" 
containerID="c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e" Feb 16 15:24:07 crc kubenswrapper[4748]: E0216 15:24:07.052646 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e\": container with ID starting with c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e not found: ID does not exist" containerID="c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e" Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.052741 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e"} err="failed to get container status \"c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e\": rpc error: code = NotFound desc = could not find container \"c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e\": container with ID starting with c1168624cf4f37c191e38c848a679c49900349f273edb6ffb136d7b2f362006e not found: ID does not exist" Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.052783 4748 scope.go:117] "RemoveContainer" containerID="6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5" Feb 16 15:24:07 crc kubenswrapper[4748]: E0216 15:24:07.053220 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5\": container with ID starting with 6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5 not found: ID does not exist" containerID="6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5" Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.053256 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5"} err="failed to get container status \"6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5\": rpc error: code = NotFound desc = could not find container \"6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5\": container with ID starting with 6fda2838a0416d9505764a8335b0bfc33f782649aded9e3d0194e4cbeb6659a5 not found: ID does not exist" Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.053284 4748 scope.go:117] "RemoveContainer" containerID="69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6" Feb 16 15:24:07 crc kubenswrapper[4748]: E0216 15:24:07.053936 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6\": container with ID starting with 69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6 not found: ID does not exist" containerID="69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6" Feb 16 15:24:07 crc kubenswrapper[4748]: I0216 15:24:07.053959 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6"} err="failed to get container status \"69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6\": rpc error: code = NotFound desc = could not find container \"69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6\": container with ID starting with 69ce38a7975024eeba4308dc7093c00d1f2767c63182c8ac1dbbb3f0d0fae7f6 not found: ID does not exist" Feb 16 15:24:11 crc kubenswrapper[4748]: I0216 15:24:11.045100 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-msqwg"] Feb 16 15:24:11 crc kubenswrapper[4748]: I0216 15:24:11.058207 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/barbican-db-sync-7tc2j"] Feb 16 15:24:11 crc kubenswrapper[4748]: I0216 15:24:11.071028 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7tc2j"] Feb 16 15:24:11 crc kubenswrapper[4748]: I0216 15:24:11.082263 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-msqwg"] Feb 16 15:24:13 crc kubenswrapper[4748]: I0216 15:24:13.013104 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="011c3199-3e59-4794-ab44-de1abe4675a0" path="/var/lib/kubelet/pods/011c3199-3e59-4794-ab44-de1abe4675a0/volumes" Feb 16 15:24:13 crc kubenswrapper[4748]: I0216 15:24:13.014390 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be12071-c948-4de3-8d3e-d21df02dfa91" path="/var/lib/kubelet/pods/7be12071-c948-4de3-8d3e-d21df02dfa91/volumes" Feb 16 15:24:15 crc kubenswrapper[4748]: E0216 15:24:15.134745 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:24:15 crc kubenswrapper[4748]: E0216 15:24:15.135270 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:24:15 crc kubenswrapper[4748]: E0216 15:24:15.135541 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:24:15 crc kubenswrapper[4748]: E0216 15:24:15.136844 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:24:16 crc kubenswrapper[4748]: I0216 15:24:16.995354 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:24:16 crc kubenswrapper[4748]: E0216 15:24:16.996209 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:24:19 crc kubenswrapper[4748]: I0216 15:24:19.674573 4748 scope.go:117] "RemoveContainer" containerID="e479790bbaba4129511d3075304fad8e857f596d77df601c7464e9f4ff714ffe" Feb 16 15:24:19 crc kubenswrapper[4748]: I0216 15:24:19.727362 4748 scope.go:117] "RemoveContainer" containerID="2554e6f0741c90e15959c7f131604f11c0db39ab14d97f8ac458ce8664879fe8" Feb 16 15:24:19 crc kubenswrapper[4748]: I0216 15:24:19.766635 4748 scope.go:117] "RemoveContainer" containerID="fed68d69696b1eb5f344860cd4c49a6f421ff1adc0c21ea6ecaa77c4ac1308da" Feb 16 15:24:19 crc kubenswrapper[4748]: I0216 15:24:19.828093 4748 scope.go:117] "RemoveContainer" containerID="d0ac0b5024c0b33e80125d8427c0f452755535012176a676afd235daa737e3b6" Feb 16 15:24:19 crc kubenswrapper[4748]: I0216 15:24:19.865863 4748 scope.go:117] "RemoveContainer" containerID="eb8a6b756e2112fc2bcaa43573594481ddbe1f7a26a74a3dc3ae3affcd7735dc" Feb 16 15:24:19 crc kubenswrapper[4748]: I0216 15:24:19.927145 4748 scope.go:117] "RemoveContainer" containerID="29119b67c71cde604d33987ff5de6501dc5d0f1b8613233bcb4868750fbc7699" Feb 16 15:24:19 crc kubenswrapper[4748]: I0216 15:24:19.978873 4748 scope.go:117] "RemoveContainer" 
containerID="d1ea96a2ed17e509c3e6221f827e3118d7ce8000626475d191770f8118cd86ff" Feb 16 15:24:20 crc kubenswrapper[4748]: I0216 15:24:20.004793 4748 scope.go:117] "RemoveContainer" containerID="cb1e4f81e20336caa7c87ffce0542bfa4b87765ea78a307912a97bd190f695da" Feb 16 15:24:20 crc kubenswrapper[4748]: I0216 15:24:20.035039 4748 scope.go:117] "RemoveContainer" containerID="f6f7fd1335c444fd75ce4057f57c3a8da08a32c71f88e1483fef747317a52e62" Feb 16 15:24:20 crc kubenswrapper[4748]: I0216 15:24:20.061566 4748 scope.go:117] "RemoveContainer" containerID="07859cdc596123ee747df27a7e91f2c6e59a6afcd022065c5b2acb0d467172de" Feb 16 15:24:20 crc kubenswrapper[4748]: I0216 15:24:20.106027 4748 scope.go:117] "RemoveContainer" containerID="7bb8057d1ef92bbd9aa707b302d7b8367d26d18ebb6e07b6226bbb1e20734d54" Feb 16 15:24:20 crc kubenswrapper[4748]: I0216 15:24:20.136029 4748 scope.go:117] "RemoveContainer" containerID="01ee078aee0f2616f6cd432d2dd4de5149af707b01591ec5349725b28878f307" Feb 16 15:24:20 crc kubenswrapper[4748]: I0216 15:24:20.159560 4748 scope.go:117] "RemoveContainer" containerID="acc14d26be1aec97d56e5f4958a6785b381e7e517e4c636d671d09ec2848da4a" Feb 16 15:24:20 crc kubenswrapper[4748]: I0216 15:24:20.203814 4748 scope.go:117] "RemoveContainer" containerID="42ebc32d0d54065ae1bdf3381a3da1ce50b5807aafce279f13f8e1002ff71a6c" Feb 16 15:24:24 crc kubenswrapper[4748]: I0216 15:24:24.035638 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-5n8n8"] Feb 16 15:24:24 crc kubenswrapper[4748]: I0216 15:24:24.051169 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-5n8n8"] Feb 16 15:24:25 crc kubenswrapper[4748]: I0216 15:24:25.021453 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c88a82-b5c7-43aa-b216-2fe7bcc6dd71" path="/var/lib/kubelet/pods/59c88a82-b5c7-43aa-b216-2fe7bcc6dd71/volumes" Feb 16 15:24:27 crc kubenswrapper[4748]: E0216 15:24:27.999147 4748 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:24:31 crc kubenswrapper[4748]: I0216 15:24:31.994974 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:24:31 crc kubenswrapper[4748]: E0216 15:24:31.996303 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:24:40 crc kubenswrapper[4748]: E0216 15:24:40.997600 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:24:42 crc kubenswrapper[4748]: I0216 15:24:42.995097 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:24:42 crc kubenswrapper[4748]: E0216 15:24:42.995731 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:24:51 crc kubenswrapper[4748]: E0216 15:24:51.998754 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:24:55 crc kubenswrapper[4748]: I0216 15:24:55.995219 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:24:55 crc kubenswrapper[4748]: E0216 15:24:55.996276 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:25:00 crc kubenswrapper[4748]: I0216 15:25:00.073424 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-6005-account-create-update-rrbfq"] Feb 16 15:25:00 crc kubenswrapper[4748]: I0216 15:25:00.093861 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-6005-account-create-update-rrbfq"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.078523 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48a08b5e-7769-4af5-a383-36f041c2fc9d" path="/var/lib/kubelet/pods/48a08b5e-7769-4af5-a383-36f041c2fc9d/volumes" Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.082985 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-qqx9c"] Feb 16 15:25:01 crc kubenswrapper[4748]: 
I0216 15:25:01.104557 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-qqx9c"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.115482 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-6r2l8"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.126838 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-6r2l8"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.139648 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-5d6f-account-create-update-zt5xw"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.152247 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-ndnqn"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.163301 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-4260-account-create-update-bknw6"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.175942 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-5d6f-account-create-update-zt5xw"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.190477 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-ndnqn"] Feb 16 15:25:01 crc kubenswrapper[4748]: I0216 15:25:01.201994 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-4260-account-create-update-bknw6"] Feb 16 15:25:03 crc kubenswrapper[4748]: I0216 15:25:03.012937 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64534f44-ea87-4c15-a1a6-a9c2e8b799dd" path="/var/lib/kubelet/pods/64534f44-ea87-4c15-a1a6-a9c2e8b799dd/volumes" Feb 16 15:25:03 crc kubenswrapper[4748]: I0216 15:25:03.014480 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7846c19e-8dd8-4362-8324-3f23e257f4f4" path="/var/lib/kubelet/pods/7846c19e-8dd8-4362-8324-3f23e257f4f4/volumes" Feb 16 15:25:03 crc 
kubenswrapper[4748]: I0216 15:25:03.016303 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79ac09ea-96de-4c66-b3ba-f41bf6993859" path="/var/lib/kubelet/pods/79ac09ea-96de-4c66-b3ba-f41bf6993859/volumes" Feb 16 15:25:03 crc kubenswrapper[4748]: I0216 15:25:03.017697 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0deb4ac-44ab-4427-a6d8-ec4dcc55981e" path="/var/lib/kubelet/pods/b0deb4ac-44ab-4427-a6d8-ec4dcc55981e/volumes" Feb 16 15:25:03 crc kubenswrapper[4748]: I0216 15:25:03.020077 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3ec9879-a2a4-429f-af70-796e8246fce9" path="/var/lib/kubelet/pods/c3ec9879-a2a4-429f-af70-796e8246fce9/volumes" Feb 16 15:25:05 crc kubenswrapper[4748]: E0216 15:25:05.004941 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:25:06 crc kubenswrapper[4748]: I0216 15:25:06.994239 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:25:06 crc kubenswrapper[4748]: E0216 15:25:06.994682 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:25:18 crc kubenswrapper[4748]: E0216 15:25:17.999153 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.503610 4748 scope.go:117] "RemoveContainer" containerID="15eb3630e4adaeb16c0460571c800ff6be2b8ffeb350c9374ab914e1ba540153" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.548953 4748 scope.go:117] "RemoveContainer" containerID="3d9c470293808fd7b20c8198d2d095cb9890e2e281b2fc3959f351c759a2cdbf" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.606398 4748 scope.go:117] "RemoveContainer" containerID="f72f3e2d5b75c85c7bf73dfee935e4c3b568320ee5e7422bccb9713d80075ed9" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.659350 4748 scope.go:117] "RemoveContainer" containerID="f78504620b513404600edd5aa1ccbb5d5305103e28ba681f97bb5f8da5c003dd" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.704476 4748 scope.go:117] "RemoveContainer" containerID="8dee0793e46a1c6152475b300e16da3e79fbb60f1e483394272e1eea5eb03bbe" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.751534 4748 scope.go:117] "RemoveContainer" containerID="aca736f26971ab9a07a672df46a4d4e5cb8e87a66ab2f4d83540069f8fb0b68f" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.806113 4748 scope.go:117] "RemoveContainer" containerID="0e833dcb4f3a0ebaaab025f6b0c1ad587e826c822f10215771faf1f3b4550cc6" Feb 16 15:25:20 crc kubenswrapper[4748]: I0216 15:25:20.996136 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:25:20 crc kubenswrapper[4748]: E0216 15:25:20.996546 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:25:29 crc kubenswrapper[4748]: I0216 15:25:29.104155 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g84x5"] Feb 16 15:25:29 crc kubenswrapper[4748]: I0216 15:25:29.119009 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-g84x5"] Feb 16 15:25:29 crc kubenswrapper[4748]: E0216 15:25:29.998665 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:25:31 crc kubenswrapper[4748]: I0216 15:25:31.009284 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993b883d-8949-4e81-87a0-efed48d8dc55" path="/var/lib/kubelet/pods/993b883d-8949-4e81-87a0-efed48d8dc55/volumes" Feb 16 15:25:35 crc kubenswrapper[4748]: I0216 15:25:35.994631 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2" Feb 16 15:25:36 crc kubenswrapper[4748]: I0216 15:25:36.303793 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"c93f2141003145d5fc8af431803d3dded9dab52304d897b73534788df6935054"} Feb 16 15:25:41 crc kubenswrapper[4748]: E0216 15:25:41.000563 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:25:53 crc kubenswrapper[4748]: E0216 15:25:52.999554 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:25:55 crc kubenswrapper[4748]: I0216 15:25:55.038067 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-92xs7"] Feb 16 15:25:55 crc kubenswrapper[4748]: I0216 15:25:55.052018 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-92xs7"] Feb 16 15:25:57 crc kubenswrapper[4748]: I0216 15:25:57.011857 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5" path="/var/lib/kubelet/pods/b1a61ae2-e475-4ca4-9ae5-73ffe05db0b5/volumes" Feb 16 15:25:57 crc kubenswrapper[4748]: I0216 15:25:57.040543 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dmlnl"] Feb 16 15:25:57 crc kubenswrapper[4748]: I0216 15:25:57.053022 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-dmlnl"] Feb 16 15:25:59 crc kubenswrapper[4748]: I0216 15:25:59.007198 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df36d33b-ccf3-49af-9696-058097245d94" path="/var/lib/kubelet/pods/df36d33b-ccf3-49af-9696-058097245d94/volumes" Feb 16 15:26:05 crc kubenswrapper[4748]: E0216 15:26:05.013407 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.093020 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6hgnk"] Feb 16 15:26:14 crc kubenswrapper[4748]: E0216 15:26:14.094928 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="extract-content" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.094965 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="extract-content" Feb 16 15:26:14 crc kubenswrapper[4748]: E0216 15:26:14.095003 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="extract-utilities" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.095022 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="extract-utilities" Feb 16 15:26:14 crc kubenswrapper[4748]: E0216 15:26:14.095096 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="registry-server" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.095114 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="registry-server" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.095579 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8824dd47-af05-4b9b-85b0-668910ac051b" containerName="registry-server" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.098913 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.107310 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hgnk"] Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.170980 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-utilities\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.171358 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9prp\" (UniqueName: \"kubernetes.io/projected/0700264c-e4fe-4027-8b3f-2cb15f65cdce-kube-api-access-z9prp\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.171473 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-catalog-content\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.273162 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-catalog-content\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.273269 4748 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-utilities\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.273363 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9prp\" (UniqueName: \"kubernetes.io/projected/0700264c-e4fe-4027-8b3f-2cb15f65cdce-kube-api-access-z9prp\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.274059 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-catalog-content\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.274263 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-utilities\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.292673 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9prp\" (UniqueName: \"kubernetes.io/projected/0700264c-e4fe-4027-8b3f-2cb15f65cdce-kube-api-access-z9prp\") pod \"redhat-operators-6hgnk\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.470453 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:14 crc kubenswrapper[4748]: I0216 15:26:14.963271 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6hgnk"] Feb 16 15:26:15 crc kubenswrapper[4748]: I0216 15:26:15.709588 4748 generic.go:334] "Generic (PLEG): container finished" podID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerID="9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e" exitCode=0 Feb 16 15:26:15 crc kubenswrapper[4748]: I0216 15:26:15.709786 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hgnk" event={"ID":"0700264c-e4fe-4027-8b3f-2cb15f65cdce","Type":"ContainerDied","Data":"9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e"} Feb 16 15:26:15 crc kubenswrapper[4748]: I0216 15:26:15.709905 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hgnk" event={"ID":"0700264c-e4fe-4027-8b3f-2cb15f65cdce","Type":"ContainerStarted","Data":"a150c85bd1869193b7262ec1e3532709c28d6f387ae4aa42423d86017ffe615f"} Feb 16 15:26:17 crc kubenswrapper[4748]: I0216 15:26:17.729552 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hgnk" event={"ID":"0700264c-e4fe-4027-8b3f-2cb15f65cdce","Type":"ContainerStarted","Data":"838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44"} Feb 16 15:26:18 crc kubenswrapper[4748]: E0216 15:26:18.997649 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:26:19 crc kubenswrapper[4748]: I0216 15:26:19.754121 4748 generic.go:334] "Generic (PLEG): container finished" 
podID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerID="838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44" exitCode=0 Feb 16 15:26:19 crc kubenswrapper[4748]: I0216 15:26:19.754224 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hgnk" event={"ID":"0700264c-e4fe-4027-8b3f-2cb15f65cdce","Type":"ContainerDied","Data":"838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44"} Feb 16 15:26:21 crc kubenswrapper[4748]: I0216 15:26:21.049510 4748 scope.go:117] "RemoveContainer" containerID="9aeba28c3a78c82fa72969d5da5e3579a507fd5eae8d1391b624eb6d510f9ed3" Feb 16 15:26:21 crc kubenswrapper[4748]: I0216 15:26:21.105643 4748 scope.go:117] "RemoveContainer" containerID="48255c30433af25753936841b402a13143179682ddc5aa13ffddf1996a4d7b3d" Feb 16 15:26:21 crc kubenswrapper[4748]: I0216 15:26:21.196242 4748 scope.go:117] "RemoveContainer" containerID="8f6a91d4525671bc185440cfb0767e0764e455ec99d2b31180e37db1f646fa16" Feb 16 15:26:21 crc kubenswrapper[4748]: I0216 15:26:21.779908 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hgnk" event={"ID":"0700264c-e4fe-4027-8b3f-2cb15f65cdce","Type":"ContainerStarted","Data":"4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042"} Feb 16 15:26:21 crc kubenswrapper[4748]: I0216 15:26:21.802827 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6hgnk" podStartSLOduration=2.104975475 podStartE2EDuration="7.802807651s" podCreationTimestamp="2026-02-16 15:26:14 +0000 UTC" firstStartedPulling="2026-02-16 15:26:15.712177421 +0000 UTC m=+2001.403846470" lastFinishedPulling="2026-02-16 15:26:21.410009607 +0000 UTC m=+2007.101678646" observedRunningTime="2026-02-16 15:26:21.797045099 +0000 UTC m=+2007.488714158" watchObservedRunningTime="2026-02-16 15:26:21.802807651 +0000 UTC m=+2007.494476690" Feb 16 15:26:24 crc kubenswrapper[4748]: I0216 
15:26:24.471148 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:24 crc kubenswrapper[4748]: I0216 15:26:24.471432 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:25 crc kubenswrapper[4748]: I0216 15:26:25.528188 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6hgnk" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="registry-server" probeResult="failure" output=< Feb 16 15:26:25 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:26:25 crc kubenswrapper[4748]: > Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.697976 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c8zgx"] Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.700954 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.713819 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8zgx"] Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.855926 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-catalog-content\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.856112 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-utilities\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.856167 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqk69\" (UniqueName: \"kubernetes.io/projected/f41900d2-7d7d-4192-a310-b52b01143f5f-kube-api-access-kqk69\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.958434 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-catalog-content\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.958524 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-utilities\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.958553 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqk69\" (UniqueName: \"kubernetes.io/projected/f41900d2-7d7d-4192-a310-b52b01143f5f-kube-api-access-kqk69\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.959074 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-utilities\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.959071 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-catalog-content\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:26 crc kubenswrapper[4748]: I0216 15:26:26.978801 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqk69\" (UniqueName: \"kubernetes.io/projected/f41900d2-7d7d-4192-a310-b52b01143f5f-kube-api-access-kqk69\") pod \"community-operators-c8zgx\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:27 crc kubenswrapper[4748]: I0216 15:26:27.066968 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:27 crc kubenswrapper[4748]: I0216 15:26:27.769925 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c8zgx"] Feb 16 15:26:27 crc kubenswrapper[4748]: I0216 15:26:27.838698 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8zgx" event={"ID":"f41900d2-7d7d-4192-a310-b52b01143f5f","Type":"ContainerStarted","Data":"68188bad3fe6e4a58d1320f897809f5751a6a2b0eed2c7c81f1be5d8ef0d5bc5"} Feb 16 15:26:28 crc kubenswrapper[4748]: I0216 15:26:28.853690 4748 generic.go:334] "Generic (PLEG): container finished" podID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerID="4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851" exitCode=0 Feb 16 15:26:28 crc kubenswrapper[4748]: I0216 15:26:28.853805 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8zgx" event={"ID":"f41900d2-7d7d-4192-a310-b52b01143f5f","Type":"ContainerDied","Data":"4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851"} Feb 16 15:26:29 crc kubenswrapper[4748]: I0216 15:26:29.863722 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8zgx" event={"ID":"f41900d2-7d7d-4192-a310-b52b01143f5f","Type":"ContainerStarted","Data":"4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068"} Feb 16 15:26:30 crc kubenswrapper[4748]: I0216 15:26:30.879132 4748 generic.go:334] "Generic (PLEG): container finished" podID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerID="4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068" exitCode=0 Feb 16 15:26:30 crc kubenswrapper[4748]: I0216 15:26:30.879181 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8zgx" 
event={"ID":"f41900d2-7d7d-4192-a310-b52b01143f5f","Type":"ContainerDied","Data":"4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068"} Feb 16 15:26:31 crc kubenswrapper[4748]: I0216 15:26:31.892681 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8zgx" event={"ID":"f41900d2-7d7d-4192-a310-b52b01143f5f","Type":"ContainerStarted","Data":"6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0"} Feb 16 15:26:31 crc kubenswrapper[4748]: I0216 15:26:31.919617 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c8zgx" podStartSLOduration=3.376101078 podStartE2EDuration="5.919597938s" podCreationTimestamp="2026-02-16 15:26:26 +0000 UTC" firstStartedPulling="2026-02-16 15:26:28.856437957 +0000 UTC m=+2014.548107046" lastFinishedPulling="2026-02-16 15:26:31.399934867 +0000 UTC m=+2017.091603906" observedRunningTime="2026-02-16 15:26:31.911484628 +0000 UTC m=+2017.603153697" watchObservedRunningTime="2026-02-16 15:26:31.919597938 +0000 UTC m=+2017.611266987" Feb 16 15:26:33 crc kubenswrapper[4748]: E0216 15:26:33.003072 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:26:35 crc kubenswrapper[4748]: I0216 15:26:35.541362 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6hgnk" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="registry-server" probeResult="failure" output=< Feb 16 15:26:35 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:26:35 crc kubenswrapper[4748]: > Feb 16 15:26:37 crc kubenswrapper[4748]: I0216 15:26:37.067758 4748 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:37 crc kubenswrapper[4748]: I0216 15:26:37.068292 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:37 crc kubenswrapper[4748]: I0216 15:26:37.114929 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:38 crc kubenswrapper[4748]: I0216 15:26:38.016125 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:38 crc kubenswrapper[4748]: I0216 15:26:38.077805 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8zgx"] Feb 16 15:26:39 crc kubenswrapper[4748]: I0216 15:26:39.993826 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c8zgx" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="registry-server" containerID="cri-o://6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0" gracePeriod=2 Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.491245 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.570929 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-utilities\") pod \"f41900d2-7d7d-4192-a310-b52b01143f5f\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.571118 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqk69\" (UniqueName: \"kubernetes.io/projected/f41900d2-7d7d-4192-a310-b52b01143f5f-kube-api-access-kqk69\") pod \"f41900d2-7d7d-4192-a310-b52b01143f5f\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.571335 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-catalog-content\") pod \"f41900d2-7d7d-4192-a310-b52b01143f5f\" (UID: \"f41900d2-7d7d-4192-a310-b52b01143f5f\") " Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.571770 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-utilities" (OuterVolumeSpecName: "utilities") pod "f41900d2-7d7d-4192-a310-b52b01143f5f" (UID: "f41900d2-7d7d-4192-a310-b52b01143f5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.590054 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f41900d2-7d7d-4192-a310-b52b01143f5f-kube-api-access-kqk69" (OuterVolumeSpecName: "kube-api-access-kqk69") pod "f41900d2-7d7d-4192-a310-b52b01143f5f" (UID: "f41900d2-7d7d-4192-a310-b52b01143f5f"). InnerVolumeSpecName "kube-api-access-kqk69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.618483 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f41900d2-7d7d-4192-a310-b52b01143f5f" (UID: "f41900d2-7d7d-4192-a310-b52b01143f5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.673467 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqk69\" (UniqueName: \"kubernetes.io/projected/f41900d2-7d7d-4192-a310-b52b01143f5f-kube-api-access-kqk69\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.673507 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:40 crc kubenswrapper[4748]: I0216 15:26:40.673521 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f41900d2-7d7d-4192-a310-b52b01143f5f-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.005088 4748 generic.go:334] "Generic (PLEG): container finished" podID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerID="6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0" exitCode=0 Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.005176 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c8zgx" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.007169 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8zgx" event={"ID":"f41900d2-7d7d-4192-a310-b52b01143f5f","Type":"ContainerDied","Data":"6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0"} Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.007243 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c8zgx" event={"ID":"f41900d2-7d7d-4192-a310-b52b01143f5f","Type":"ContainerDied","Data":"68188bad3fe6e4a58d1320f897809f5751a6a2b0eed2c7c81f1be5d8ef0d5bc5"} Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.007274 4748 scope.go:117] "RemoveContainer" containerID="6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.038524 4748 scope.go:117] "RemoveContainer" containerID="4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.052993 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c8zgx"] Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.067729 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-kpzzb"] Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.071377 4748 scope.go:117] "RemoveContainer" containerID="4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.077345 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-kpzzb"] Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.085067 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c8zgx"] Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.128415 
4748 scope.go:117] "RemoveContainer" containerID="6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0" Feb 16 15:26:41 crc kubenswrapper[4748]: E0216 15:26:41.134256 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0\": container with ID starting with 6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0 not found: ID does not exist" containerID="6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.134318 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0"} err="failed to get container status \"6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0\": rpc error: code = NotFound desc = could not find container \"6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0\": container with ID starting with 6aad542677a8b80865e0786e04746bd2c54edd9f517a0543cfa735843aa24ed0 not found: ID does not exist" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.134355 4748 scope.go:117] "RemoveContainer" containerID="4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068" Feb 16 15:26:41 crc kubenswrapper[4748]: E0216 15:26:41.134814 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068\": container with ID starting with 4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068 not found: ID does not exist" containerID="4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.134840 4748 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068"} err="failed to get container status \"4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068\": rpc error: code = NotFound desc = could not find container \"4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068\": container with ID starting with 4fd55a463abbd2a1217c2c0a743672f8cabb14772a5261f2f9a5d59a5d226068 not found: ID does not exist" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.134856 4748 scope.go:117] "RemoveContainer" containerID="4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851" Feb 16 15:26:41 crc kubenswrapper[4748]: E0216 15:26:41.135128 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851\": container with ID starting with 4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851 not found: ID does not exist" containerID="4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851" Feb 16 15:26:41 crc kubenswrapper[4748]: I0216 15:26:41.135156 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851"} err="failed to get container status \"4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851\": rpc error: code = NotFound desc = could not find container \"4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851\": container with ID starting with 4cb5f2bbc5d934a3f0d85bddd63cadee973a0bcbfbf4b0f387da9054a1706851 not found: ID does not exist" Feb 16 15:26:43 crc kubenswrapper[4748]: I0216 15:26:43.007182 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5e00d08-e68a-479e-961b-38bc4e12b351" path="/var/lib/kubelet/pods/d5e00d08-e68a-479e-961b-38bc4e12b351/volumes" Feb 16 15:26:43 crc kubenswrapper[4748]: I0216 
15:26:43.008347 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" path="/var/lib/kubelet/pods/f41900d2-7d7d-4192-a310-b52b01143f5f/volumes" Feb 16 15:26:45 crc kubenswrapper[4748]: I0216 15:26:45.586832 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6hgnk" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="registry-server" probeResult="failure" output=< Feb 16 15:26:45 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:26:45 crc kubenswrapper[4748]: > Feb 16 15:26:47 crc kubenswrapper[4748]: E0216 15:26:47.996068 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:26:54 crc kubenswrapper[4748]: I0216 15:26:54.526267 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:54 crc kubenswrapper[4748]: I0216 15:26:54.576468 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:54 crc kubenswrapper[4748]: I0216 15:26:54.781680 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hgnk"] Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.176708 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6hgnk" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="registry-server" containerID="cri-o://4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042" gracePeriod=2 Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.733658 
4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6hgnk" Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.849655 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-utilities\") pod \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.849902 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9prp\" (UniqueName: \"kubernetes.io/projected/0700264c-e4fe-4027-8b3f-2cb15f65cdce-kube-api-access-z9prp\") pod \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.849978 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-catalog-content\") pod \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\" (UID: \"0700264c-e4fe-4027-8b3f-2cb15f65cdce\") " Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.851393 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-utilities" (OuterVolumeSpecName: "utilities") pod "0700264c-e4fe-4027-8b3f-2cb15f65cdce" (UID: "0700264c-e4fe-4027-8b3f-2cb15f65cdce"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.863767 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0700264c-e4fe-4027-8b3f-2cb15f65cdce-kube-api-access-z9prp" (OuterVolumeSpecName: "kube-api-access-z9prp") pod "0700264c-e4fe-4027-8b3f-2cb15f65cdce" (UID: "0700264c-e4fe-4027-8b3f-2cb15f65cdce"). 
InnerVolumeSpecName "kube-api-access-z9prp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.952286 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9prp\" (UniqueName: \"kubernetes.io/projected/0700264c-e4fe-4027-8b3f-2cb15f65cdce-kube-api-access-z9prp\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.952325 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:56 crc kubenswrapper[4748]: I0216 15:26:56.966326 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0700264c-e4fe-4027-8b3f-2cb15f65cdce" (UID: "0700264c-e4fe-4027-8b3f-2cb15f65cdce"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.053899 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0700264c-e4fe-4027-8b3f-2cb15f65cdce-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.195187 4748 generic.go:334] "Generic (PLEG): container finished" podID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerID="4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042" exitCode=0 Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.195241 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hgnk" event={"ID":"0700264c-e4fe-4027-8b3f-2cb15f65cdce","Type":"ContainerDied","Data":"4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042"} Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.195273 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6hgnk" event={"ID":"0700264c-e4fe-4027-8b3f-2cb15f65cdce","Type":"ContainerDied","Data":"a150c85bd1869193b7262ec1e3532709c28d6f387ae4aa42423d86017ffe615f"} Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.195294 4748 scope.go:117] "RemoveContainer" containerID="4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042" Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.195296 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6hgnk"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.225559 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6hgnk"]
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.229709 4748 scope.go:117] "RemoveContainer" containerID="838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.239487 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6hgnk"]
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.267867 4748 scope.go:117] "RemoveContainer" containerID="9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.347107 4748 scope.go:117] "RemoveContainer" containerID="4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042"
Feb 16 15:26:57 crc kubenswrapper[4748]: E0216 15:26:57.350031 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042\": container with ID starting with 4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042 not found: ID does not exist" containerID="4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.350096 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042"} err="failed to get container status \"4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042\": rpc error: code = NotFound desc = could not find container \"4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042\": container with ID starting with 4f1b91280b2d80597d541137750f5a460d8c84fbe10095e77ba9df728f24b042 not found: ID does not exist"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.350141 4748 scope.go:117] "RemoveContainer" containerID="838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44"
Feb 16 15:26:57 crc kubenswrapper[4748]: E0216 15:26:57.350587 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44\": container with ID starting with 838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44 not found: ID does not exist" containerID="838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.350631 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44"} err="failed to get container status \"838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44\": rpc error: code = NotFound desc = could not find container \"838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44\": container with ID starting with 838777f3722b34b0154c65568ed63e3910f8cd1c995de9f415fe4e7a27731d44 not found: ID does not exist"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.350659 4748 scope.go:117] "RemoveContainer" containerID="9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e"
Feb 16 15:26:57 crc kubenswrapper[4748]: E0216 15:26:57.352266 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e\": container with ID starting with 9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e not found: ID does not exist" containerID="9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e"
Feb 16 15:26:57 crc kubenswrapper[4748]: I0216 15:26:57.352484 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e"} err="failed to get container status \"9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e\": rpc error: code = NotFound desc = could not find container \"9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e\": container with ID starting with 9ec037535e847eaa19717224160d1d524bf9772f46ee386352e3cb0e728d3a4e not found: ID does not exist"
Feb 16 15:26:59 crc kubenswrapper[4748]: I0216 15:26:59.029006 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" path="/var/lib/kubelet/pods/0700264c-e4fe-4027-8b3f-2cb15f65cdce/volumes"
Feb 16 15:27:03 crc kubenswrapper[4748]: E0216 15:27:03.000208 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:27:11 crc kubenswrapper[4748]: I0216 15:27:11.021776 4748 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","podf41900d2-7d7d-4192-a310-b52b01143f5f"] err="unable to destroy cgroup paths for cgroup [kubepods burstable podf41900d2-7d7d-4192-a310-b52b01143f5f] : Timed out while waiting for systemd to remove kubepods-burstable-podf41900d2_7d7d_4192_a310_b52b01143f5f.slice"
Feb 16 15:27:17 crc kubenswrapper[4748]: E0216 15:27:17.996166 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:27:21 crc kubenswrapper[4748]: I0216 15:27:21.312702 4748 scope.go:117] "RemoveContainer" containerID="d53711d99957759a4e5a8ebb3530b2da6e6eeb4d48c822ace2e842d1fa401f2b"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.012949 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8jfzr"]
Feb 16 15:27:23 crc kubenswrapper[4748]: E0216 15:27:23.013833 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="extract-utilities"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.013853 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="extract-utilities"
Feb 16 15:27:23 crc kubenswrapper[4748]: E0216 15:27:23.013885 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="extract-content"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.013896 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="extract-content"
Feb 16 15:27:23 crc kubenswrapper[4748]: E0216 15:27:23.013917 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="extract-utilities"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.013927 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="extract-utilities"
Feb 16 15:27:23 crc kubenswrapper[4748]: E0216 15:27:23.013943 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="extract-content"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.013954 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="extract-content"
Feb 16 15:27:23 crc kubenswrapper[4748]: E0216 15:27:23.013971 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="registry-server"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.013981 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="registry-server"
Feb 16 15:27:23 crc kubenswrapper[4748]: E0216 15:27:23.014001 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="registry-server"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.014011 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="registry-server"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.014337 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f41900d2-7d7d-4192-a310-b52b01143f5f" containerName="registry-server"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.014368 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="0700264c-e4fe-4027-8b3f-2cb15f65cdce" containerName="registry-server"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.016776 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.042985 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jfzr"]
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.110359 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-utilities\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.110698 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-catalog-content\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.110846 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5t62\" (UniqueName: \"kubernetes.io/projected/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-kube-api-access-n5t62\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.213168 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-utilities\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.213296 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-catalog-content\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.213347 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5t62\" (UniqueName: \"kubernetes.io/projected/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-kube-api-access-n5t62\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.214207 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-utilities\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.214527 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-catalog-content\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.239284 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5t62\" (UniqueName: \"kubernetes.io/projected/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-kube-api-access-n5t62\") pod \"certified-operators-8jfzr\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") " pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.369389 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:23 crc kubenswrapper[4748]: I0216 15:27:23.919285 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8jfzr"]
Feb 16 15:27:24 crc kubenswrapper[4748]: I0216 15:27:24.493576 4748 generic.go:334] "Generic (PLEG): container finished" podID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerID="dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d" exitCode=0
Feb 16 15:27:24 crc kubenswrapper[4748]: I0216 15:27:24.493652 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfzr" event={"ID":"8ba0b97a-9999-4e68-9ffe-18ac69bcf666","Type":"ContainerDied","Data":"dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d"}
Feb 16 15:27:24 crc kubenswrapper[4748]: I0216 15:27:24.493752 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfzr" event={"ID":"8ba0b97a-9999-4e68-9ffe-18ac69bcf666","Type":"ContainerStarted","Data":"83334d107a0ad6e3b39e9f8e7f0b44f9af66186279eb7709eeb56c3fc142348c"}
Feb 16 15:27:25 crc kubenswrapper[4748]: I0216 15:27:25.507199 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfzr" event={"ID":"8ba0b97a-9999-4e68-9ffe-18ac69bcf666","Type":"ContainerStarted","Data":"3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149"}
Feb 16 15:27:26 crc kubenswrapper[4748]: I0216 15:27:26.516484 4748 generic.go:334] "Generic (PLEG): container finished" podID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerID="3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149" exitCode=0
Feb 16 15:27:26 crc kubenswrapper[4748]: I0216 15:27:26.516616 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfzr" event={"ID":"8ba0b97a-9999-4e68-9ffe-18ac69bcf666","Type":"ContainerDied","Data":"3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149"}
Feb 16 15:27:27 crc kubenswrapper[4748]: I0216 15:27:27.525962 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfzr" event={"ID":"8ba0b97a-9999-4e68-9ffe-18ac69bcf666","Type":"ContainerStarted","Data":"c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654"}
Feb 16 15:27:27 crc kubenswrapper[4748]: I0216 15:27:27.550317 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8jfzr" podStartSLOduration=3.112877193 podStartE2EDuration="5.55029352s" podCreationTimestamp="2026-02-16 15:27:22 +0000 UTC" firstStartedPulling="2026-02-16 15:27:24.496826178 +0000 UTC m=+2070.188495227" lastFinishedPulling="2026-02-16 15:27:26.934242505 +0000 UTC m=+2072.625911554" observedRunningTime="2026-02-16 15:27:27.53977744 +0000 UTC m=+2073.231446479" watchObservedRunningTime="2026-02-16 15:27:27.55029352 +0000 UTC m=+2073.241962569"
Feb 16 15:27:31 crc kubenswrapper[4748]: E0216 15:27:31.997284 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:27:33 crc kubenswrapper[4748]: I0216 15:27:33.370475 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:33 crc kubenswrapper[4748]: I0216 15:27:33.371034 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:33 crc kubenswrapper[4748]: I0216 15:27:33.493123 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:33 crc kubenswrapper[4748]: I0216 15:27:33.647729 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:33 crc kubenswrapper[4748]: I0216 15:27:33.728306 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jfzr"]
Feb 16 15:27:35 crc kubenswrapper[4748]: I0216 15:27:35.615231 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8jfzr" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="registry-server" containerID="cri-o://c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654" gracePeriod=2
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.206785 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.327005 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-utilities\") pod \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") "
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.327115 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5t62\" (UniqueName: \"kubernetes.io/projected/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-kube-api-access-n5t62\") pod \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") "
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.327186 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-catalog-content\") pod \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\" (UID: \"8ba0b97a-9999-4e68-9ffe-18ac69bcf666\") "
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.329045 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-utilities" (OuterVolumeSpecName: "utilities") pod "8ba0b97a-9999-4e68-9ffe-18ac69bcf666" (UID: "8ba0b97a-9999-4e68-9ffe-18ac69bcf666"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.333988 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-kube-api-access-n5t62" (OuterVolumeSpecName: "kube-api-access-n5t62") pod "8ba0b97a-9999-4e68-9ffe-18ac69bcf666" (UID: "8ba0b97a-9999-4e68-9ffe-18ac69bcf666"). InnerVolumeSpecName "kube-api-access-n5t62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.408346 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ba0b97a-9999-4e68-9ffe-18ac69bcf666" (UID: "8ba0b97a-9999-4e68-9ffe-18ac69bcf666"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.429808 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.429857 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.429877 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5t62\" (UniqueName: \"kubernetes.io/projected/8ba0b97a-9999-4e68-9ffe-18ac69bcf666-kube-api-access-n5t62\") on node \"crc\" DevicePath \"\""
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.641313 4748 generic.go:334] "Generic (PLEG): container finished" podID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerID="c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654" exitCode=0
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.641358 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfzr" event={"ID":"8ba0b97a-9999-4e68-9ffe-18ac69bcf666","Type":"ContainerDied","Data":"c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654"}
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.641388 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8jfzr" event={"ID":"8ba0b97a-9999-4e68-9ffe-18ac69bcf666","Type":"ContainerDied","Data":"83334d107a0ad6e3b39e9f8e7f0b44f9af66186279eb7709eeb56c3fc142348c"}
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.641404 4748 scope.go:117] "RemoveContainer" containerID="c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.641469 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8jfzr"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.679202 4748 scope.go:117] "RemoveContainer" containerID="3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.714953 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8jfzr"]
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.726805 4748 scope.go:117] "RemoveContainer" containerID="dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.729262 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8jfzr"]
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.818357 4748 scope.go:117] "RemoveContainer" containerID="c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654"
Feb 16 15:27:36 crc kubenswrapper[4748]: E0216 15:27:36.818877 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654\": container with ID starting with c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654 not found: ID does not exist" containerID="c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.818908 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654"} err="failed to get container status \"c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654\": rpc error: code = NotFound desc = could not find container \"c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654\": container with ID starting with c2e07329742c0cc691a5df829d43b5fcf793aaf387a6d9fb790cfa82e3556654 not found: ID does not exist"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.818929 4748 scope.go:117] "RemoveContainer" containerID="3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149"
Feb 16 15:27:36 crc kubenswrapper[4748]: E0216 15:27:36.819253 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149\": container with ID starting with 3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149 not found: ID does not exist" containerID="3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.819285 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149"} err="failed to get container status \"3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149\": rpc error: code = NotFound desc = could not find container \"3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149\": container with ID starting with 3aebd5c652decc10af739bbfc81f2bb9eb5c949ce2ac9b39652f924a9391f149 not found: ID does not exist"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.819309 4748 scope.go:117] "RemoveContainer" containerID="dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d"
Feb 16 15:27:36 crc kubenswrapper[4748]: E0216 15:27:36.819571 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d\": container with ID starting with dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d not found: ID does not exist" containerID="dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d"
Feb 16 15:27:36 crc kubenswrapper[4748]: I0216 15:27:36.819603 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d"} err="failed to get container status \"dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d\": rpc error: code = NotFound desc = could not find container \"dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d\": container with ID starting with dda5a2ffa1d98c9544ea3d001486911d34e5318a36c120a7d398358e802fa38d not found: ID does not exist"
Feb 16 15:27:37 crc kubenswrapper[4748]: I0216 15:27:37.004740 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" path="/var/lib/kubelet/pods/8ba0b97a-9999-4e68-9ffe-18ac69bcf666/volumes"
Feb 16 15:27:46 crc kubenswrapper[4748]: E0216 15:27:46.996253 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:28:01 crc kubenswrapper[4748]: E0216 15:28:01.005959 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:28:04 crc kubenswrapper[4748]: I0216 15:28:04.729355 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:28:04 crc kubenswrapper[4748]: I0216 15:28:04.729674 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:28:11 crc kubenswrapper[4748]: E0216 15:28:11.997173 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:28:23 crc kubenswrapper[4748]: E0216 15:28:23.998273 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:28:34 crc kubenswrapper[4748]: I0216 15:28:34.730036 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:28:34 crc kubenswrapper[4748]: I0216 15:28:34.730698 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:28:38 crc kubenswrapper[4748]: E0216 15:28:38.998098 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:28:53 crc kubenswrapper[4748]: E0216 15:28:53.998641 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:29:04 crc kubenswrapper[4748]: I0216 15:29:04.729824 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:29:04 crc kubenswrapper[4748]: I0216 15:29:04.730556 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:29:04 crc kubenswrapper[4748]: I0216 15:29:04.730625 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg"
Feb 16 15:29:04 crc kubenswrapper[4748]: I0216 15:29:04.731764 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c93f2141003145d5fc8af431803d3dded9dab52304d897b73534788df6935054"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 15:29:04 crc kubenswrapper[4748]: I0216 15:29:04.731869 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://c93f2141003145d5fc8af431803d3dded9dab52304d897b73534788df6935054" gracePeriod=600
Feb 16 15:29:05 crc kubenswrapper[4748]: E0216 15:29:05.003496 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:29:05 crc kubenswrapper[4748]: I0216 15:29:05.682510 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="c93f2141003145d5fc8af431803d3dded9dab52304d897b73534788df6935054" exitCode=0
Feb 16 15:29:05 crc kubenswrapper[4748]: I0216 15:29:05.682581 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"c93f2141003145d5fc8af431803d3dded9dab52304d897b73534788df6935054"}
Feb 16 15:29:05 crc kubenswrapper[4748]: I0216 15:29:05.682834 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98"}
Feb 16 15:29:05 crc kubenswrapper[4748]: I0216 15:29:05.682853 4748 scope.go:117] "RemoveContainer" containerID="25f8884a6429ddc10ce15e9206eb7dfc5cec828fefba151073e233840f394ed2"
Feb 16 15:29:20 crc kubenswrapper[4748]: I0216 15:29:19.999342 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 15:29:20 crc kubenswrapper[4748]: E0216 15:29:20.109875 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 15:29:20 crc kubenswrapper[4748]: E0216 15:29:20.109926 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current"
Feb 16 15:29:20 crc kubenswrapper[4748]: E0216 15:29:20.110045 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:29:20 crc kubenswrapper[4748]: E0216 15:29:20.111318 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:29:35 crc kubenswrapper[4748]: E0216 15:29:35.004469 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:29:46 crc kubenswrapper[4748]: E0216 15:29:45.998628 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:29:59 crc kubenswrapper[4748]: E0216 15:29:59.997674 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.168911 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx"] Feb 16 15:30:00 crc kubenswrapper[4748]: E0216 15:30:00.169683 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.169708 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4748]: E0216 15:30:00.169747 4748 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="extract-content" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.169755 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="extract-content" Feb 16 15:30:00 crc kubenswrapper[4748]: E0216 15:30:00.169782 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.169793 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="extract-utilities" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.170042 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ba0b97a-9999-4e68-9ffe-18ac69bcf666" containerName="registry-server" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.170991 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.173813 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.174099 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.187368 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx"] Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.285868 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-secret-volume\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.286174 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-config-volume\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.286388 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkvwl\" (UniqueName: \"kubernetes.io/projected/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-kube-api-access-dkvwl\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.388117 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkvwl\" (UniqueName: \"kubernetes.io/projected/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-kube-api-access-dkvwl\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.388293 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-secret-volume\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.388507 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-config-volume\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.390440 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-config-volume\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.400236 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-secret-volume\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.420075 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkvwl\" (UniqueName: \"kubernetes.io/projected/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-kube-api-access-dkvwl\") pod \"collect-profiles-29520930-vs9cx\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:00 crc kubenswrapper[4748]: I0216 15:30:00.493248 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:01 crc kubenswrapper[4748]: I0216 15:30:01.037469 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx"] Feb 16 15:30:01 crc kubenswrapper[4748]: I0216 15:30:01.317134 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" event={"ID":"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12","Type":"ContainerStarted","Data":"33ef86d38ab79fe50796a3dcdd8626204c398559300733035a1feacdada1f5f7"} Feb 16 15:30:01 crc kubenswrapper[4748]: I0216 15:30:01.317189 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" event={"ID":"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12","Type":"ContainerStarted","Data":"31ec9fb0ba969bbe23fce2501248686ff176c7cfbedfb059ba71c9173dbf73de"} Feb 16 15:30:01 crc kubenswrapper[4748]: I0216 15:30:01.348553 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" 
podStartSLOduration=1.348533592 podStartE2EDuration="1.348533592s" podCreationTimestamp="2026-02-16 15:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 15:30:01.338825442 +0000 UTC m=+2227.030494491" watchObservedRunningTime="2026-02-16 15:30:01.348533592 +0000 UTC m=+2227.040202661" Feb 16 15:30:02 crc kubenswrapper[4748]: I0216 15:30:02.362914 4748 generic.go:334] "Generic (PLEG): container finished" podID="432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12" containerID="33ef86d38ab79fe50796a3dcdd8626204c398559300733035a1feacdada1f5f7" exitCode=0 Feb 16 15:30:02 crc kubenswrapper[4748]: I0216 15:30:02.363123 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" event={"ID":"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12","Type":"ContainerDied","Data":"33ef86d38ab79fe50796a3dcdd8626204c398559300733035a1feacdada1f5f7"} Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.750180 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.872628 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkvwl\" (UniqueName: \"kubernetes.io/projected/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-kube-api-access-dkvwl\") pod \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.872765 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-config-volume\") pod \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.872799 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-secret-volume\") pod \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\" (UID: \"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12\") " Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.873240 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-config-volume" (OuterVolumeSpecName: "config-volume") pod "432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12" (UID: "432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.878878 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-kube-api-access-dkvwl" (OuterVolumeSpecName: "kube-api-access-dkvwl") pod "432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12" (UID: "432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12"). 
InnerVolumeSpecName "kube-api-access-dkvwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.879087 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12" (UID: "432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.975260 4748 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.975849 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkvwl\" (UniqueName: \"kubernetes.io/projected/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-kube-api-access-dkvwl\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:03 crc kubenswrapper[4748]: I0216 15:30:03.975969 4748 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:30:04 crc kubenswrapper[4748]: I0216 15:30:04.392784 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" event={"ID":"432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12","Type":"ContainerDied","Data":"31ec9fb0ba969bbe23fce2501248686ff176c7cfbedfb059ba71c9173dbf73de"} Feb 16 15:30:04 crc kubenswrapper[4748]: I0216 15:30:04.392876 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ec9fb0ba969bbe23fce2501248686ff176c7cfbedfb059ba71c9173dbf73de" Feb 16 15:30:04 crc kubenswrapper[4748]: I0216 15:30:04.392990 4748 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520930-vs9cx" Feb 16 15:30:04 crc kubenswrapper[4748]: I0216 15:30:04.436631 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"] Feb 16 15:30:04 crc kubenswrapper[4748]: I0216 15:30:04.448668 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520885-6j92m"] Feb 16 15:30:05 crc kubenswrapper[4748]: I0216 15:30:05.014855 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4609715-d418-4f84-843a-b916f5e920ec" path="/var/lib/kubelet/pods/d4609715-d418-4f84-843a-b916f5e920ec/volumes" Feb 16 15:30:13 crc kubenswrapper[4748]: E0216 15:30:13.998062 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:30:21 crc kubenswrapper[4748]: I0216 15:30:21.519570 4748 scope.go:117] "RemoveContainer" containerID="85d6648dd409023222d97b7ee859c9058b3359b0e4bfe60ae3363dea3fe6d032" Feb 16 15:30:25 crc kubenswrapper[4748]: E0216 15:30:25.996052 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:30:40 crc kubenswrapper[4748]: E0216 15:30:40.000563 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:30:55 crc kubenswrapper[4748]: E0216 15:30:55.012961 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:31:07 crc kubenswrapper[4748]: E0216 15:31:07.998200 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:31:22 crc kubenswrapper[4748]: E0216 15:31:22.997677 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:31:34 crc kubenswrapper[4748]: I0216 15:31:34.729544 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:31:34 crc kubenswrapper[4748]: I0216 15:31:34.730457 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:31:35 crc kubenswrapper[4748]: E0216 15:31:35.997533 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:31:47 crc kubenswrapper[4748]: E0216 15:31:47.997267 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:31:59 crc kubenswrapper[4748]: E0216 15:31:59.997013 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:32:04 crc kubenswrapper[4748]: I0216 15:32:04.730111 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:32:04 crc kubenswrapper[4748]: I0216 15:32:04.730503 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:32:13 crc kubenswrapper[4748]: E0216 15:32:13.997387 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:32:25 crc kubenswrapper[4748]: E0216 15:32:25.011554 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:32:34 crc kubenswrapper[4748]: I0216 15:32:34.729217 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:32:34 crc kubenswrapper[4748]: I0216 15:32:34.729754 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:32:34 crc kubenswrapper[4748]: I0216 15:32:34.729804 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 15:32:34 crc kubenswrapper[4748]: I0216 15:32:34.730554 4748 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:32:34 crc kubenswrapper[4748]: I0216 15:32:34.730631 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" gracePeriod=600 Feb 16 15:32:34 crc kubenswrapper[4748]: E0216 15:32:34.862070 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:32:35 crc kubenswrapper[4748]: I0216 15:32:35.210755 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" exitCode=0 Feb 16 15:32:35 crc kubenswrapper[4748]: I0216 15:32:35.210801 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98"} Feb 16 15:32:35 crc kubenswrapper[4748]: I0216 15:32:35.210836 4748 scope.go:117] "RemoveContainer" containerID="c93f2141003145d5fc8af431803d3dded9dab52304d897b73534788df6935054" Feb 16 15:32:35 crc 
kubenswrapper[4748]: I0216 15:32:35.211510 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:32:35 crc kubenswrapper[4748]: E0216 15:32:35.211795 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:32:35 crc kubenswrapper[4748]: E0216 15:32:35.995782 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:32:46 crc kubenswrapper[4748]: I0216 15:32:46.994658 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:32:47 crc kubenswrapper[4748]: E0216 15:32:46.995826 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:32:48 crc kubenswrapper[4748]: E0216 15:32:48.997243 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:33:01 crc kubenswrapper[4748]: E0216 15:33:00.999881 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:33:01 crc kubenswrapper[4748]: I0216 15:33:01.995093 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:33:01 crc kubenswrapper[4748]: E0216 15:33:01.995778 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:33:12 crc kubenswrapper[4748]: I0216 15:33:12.995300 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:33:12 crc kubenswrapper[4748]: E0216 15:33:12.996641 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:33:15 crc kubenswrapper[4748]: E0216 15:33:15.997167 4748 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:33:26 crc kubenswrapper[4748]: I0216 15:33:26.994983 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:33:26 crc kubenswrapper[4748]: E0216 15:33:26.996070 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:33:28 crc kubenswrapper[4748]: E0216 15:33:28.997497 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:33:41 crc kubenswrapper[4748]: I0216 15:33:41.997013 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:33:41 crc kubenswrapper[4748]: E0216 15:33:41.998197 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:33:42 crc kubenswrapper[4748]: E0216 15:33:42.997666 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:33:54 crc kubenswrapper[4748]: E0216 15:33:53.997247 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:33:56 crc kubenswrapper[4748]: I0216 15:33:56.994925 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:33:56 crc kubenswrapper[4748]: E0216 15:33:56.996062 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:34:07 crc kubenswrapper[4748]: I0216 15:34:07.995297 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:34:07 crc kubenswrapper[4748]: E0216 15:34:07.996593 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:34:09 crc kubenswrapper[4748]: E0216 15:34:08.998790 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:34:16 crc kubenswrapper[4748]: I0216 15:34:16.968526 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wcr45"] Feb 16 15:34:16 crc kubenswrapper[4748]: E0216 15:34:16.969621 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12" containerName="collect-profiles" Feb 16 15:34:16 crc kubenswrapper[4748]: I0216 15:34:16.969638 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12" containerName="collect-profiles" Feb 16 15:34:16 crc kubenswrapper[4748]: I0216 15:34:16.969901 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="432ce8c3-08ba-4dfa-9d8a-b5bb6cdffe12" containerName="collect-profiles" Feb 16 15:34:16 crc kubenswrapper[4748]: I0216 15:34:16.971743 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:16 crc kubenswrapper[4748]: I0216 15:34:16.982684 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcr45"] Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.117822 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-utilities\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.117917 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrx2j\" (UniqueName: \"kubernetes.io/projected/43c9619e-247f-4f17-9ed7-920385ac4560-kube-api-access-jrx2j\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.118100 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-catalog-content\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.219669 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrx2j\" (UniqueName: \"kubernetes.io/projected/43c9619e-247f-4f17-9ed7-920385ac4560-kube-api-access-jrx2j\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.219818 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-catalog-content\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.220556 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-catalog-content\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.220596 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-utilities\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.220570 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-utilities\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.247445 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrx2j\" (UniqueName: \"kubernetes.io/projected/43c9619e-247f-4f17-9ed7-920385ac4560-kube-api-access-jrx2j\") pod \"redhat-marketplace-wcr45\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.296184 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:17 crc kubenswrapper[4748]: I0216 15:34:17.816299 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcr45"] Feb 16 15:34:18 crc kubenswrapper[4748]: I0216 15:34:18.364298 4748 generic.go:334] "Generic (PLEG): container finished" podID="43c9619e-247f-4f17-9ed7-920385ac4560" containerID="9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c" exitCode=0 Feb 16 15:34:18 crc kubenswrapper[4748]: I0216 15:34:18.364375 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcr45" event={"ID":"43c9619e-247f-4f17-9ed7-920385ac4560","Type":"ContainerDied","Data":"9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c"} Feb 16 15:34:18 crc kubenswrapper[4748]: I0216 15:34:18.364669 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcr45" event={"ID":"43c9619e-247f-4f17-9ed7-920385ac4560","Type":"ContainerStarted","Data":"9e81d76ac039c9a3d90060f4a3cae9a889dea0074390f7a5b3e1ad0b0b69cf45"} Feb 16 15:34:19 crc kubenswrapper[4748]: I0216 15:34:19.384008 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcr45" event={"ID":"43c9619e-247f-4f17-9ed7-920385ac4560","Type":"ContainerStarted","Data":"66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071"} Feb 16 15:34:19 crc kubenswrapper[4748]: E0216 15:34:19.641203 4748 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c9619e_247f_4f17_9ed7_920385ac4560.slice/crio-66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43c9619e_247f_4f17_9ed7_920385ac4560.slice/crio-conmon-66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071.scope\": RecentStats: unable to find data in memory cache]" Feb 16 15:34:19 crc kubenswrapper[4748]: I0216 15:34:19.994550 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:34:19 crc kubenswrapper[4748]: E0216 15:34:19.995107 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:34:19 crc kubenswrapper[4748]: E0216 15:34:19.996572 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:34:20 crc kubenswrapper[4748]: I0216 15:34:20.402708 4748 generic.go:334] "Generic (PLEG): container finished" podID="43c9619e-247f-4f17-9ed7-920385ac4560" containerID="66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071" exitCode=0 Feb 16 15:34:20 crc kubenswrapper[4748]: I0216 15:34:20.402825 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcr45" event={"ID":"43c9619e-247f-4f17-9ed7-920385ac4560","Type":"ContainerDied","Data":"66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071"} Feb 16 15:34:20 crc kubenswrapper[4748]: I0216 15:34:20.406605 4748 provider.go:102] Refreshing cache 
for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:34:21 crc kubenswrapper[4748]: I0216 15:34:21.416522 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcr45" event={"ID":"43c9619e-247f-4f17-9ed7-920385ac4560","Type":"ContainerStarted","Data":"4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854"} Feb 16 15:34:21 crc kubenswrapper[4748]: I0216 15:34:21.445453 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wcr45" podStartSLOduration=2.964725246 podStartE2EDuration="5.445418653s" podCreationTimestamp="2026-02-16 15:34:16 +0000 UTC" firstStartedPulling="2026-02-16 15:34:18.367183632 +0000 UTC m=+2484.058852701" lastFinishedPulling="2026-02-16 15:34:20.847877029 +0000 UTC m=+2486.539546108" observedRunningTime="2026-02-16 15:34:21.436609398 +0000 UTC m=+2487.128278447" watchObservedRunningTime="2026-02-16 15:34:21.445418653 +0000 UTC m=+2487.137087732" Feb 16 15:34:27 crc kubenswrapper[4748]: I0216 15:34:27.297231 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:27 crc kubenswrapper[4748]: I0216 15:34:27.297837 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:27 crc kubenswrapper[4748]: I0216 15:34:27.386253 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:27 crc kubenswrapper[4748]: I0216 15:34:27.564845 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:27 crc kubenswrapper[4748]: I0216 15:34:27.638886 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcr45"] Feb 16 15:34:29 crc kubenswrapper[4748]: I0216 
15:34:29.534682 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wcr45" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="registry-server" containerID="cri-o://4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854" gracePeriod=2 Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.181294 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.349461 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-utilities\") pod \"43c9619e-247f-4f17-9ed7-920385ac4560\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.349508 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrx2j\" (UniqueName: \"kubernetes.io/projected/43c9619e-247f-4f17-9ed7-920385ac4560-kube-api-access-jrx2j\") pod \"43c9619e-247f-4f17-9ed7-920385ac4560\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.349557 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-catalog-content\") pod \"43c9619e-247f-4f17-9ed7-920385ac4560\" (UID: \"43c9619e-247f-4f17-9ed7-920385ac4560\") " Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.351020 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-utilities" (OuterVolumeSpecName: "utilities") pod "43c9619e-247f-4f17-9ed7-920385ac4560" (UID: "43c9619e-247f-4f17-9ed7-920385ac4560"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.355125 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.362180 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43c9619e-247f-4f17-9ed7-920385ac4560-kube-api-access-jrx2j" (OuterVolumeSpecName: "kube-api-access-jrx2j") pod "43c9619e-247f-4f17-9ed7-920385ac4560" (UID: "43c9619e-247f-4f17-9ed7-920385ac4560"). InnerVolumeSpecName "kube-api-access-jrx2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.372587 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "43c9619e-247f-4f17-9ed7-920385ac4560" (UID: "43c9619e-247f-4f17-9ed7-920385ac4560"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.457244 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrx2j\" (UniqueName: \"kubernetes.io/projected/43c9619e-247f-4f17-9ed7-920385ac4560-kube-api-access-jrx2j\") on node \"crc\" DevicePath \"\"" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.457298 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43c9619e-247f-4f17-9ed7-920385ac4560-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.548041 4748 generic.go:334] "Generic (PLEG): container finished" podID="43c9619e-247f-4f17-9ed7-920385ac4560" containerID="4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854" exitCode=0 Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.548082 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcr45" event={"ID":"43c9619e-247f-4f17-9ed7-920385ac4560","Type":"ContainerDied","Data":"4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854"} Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.548115 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wcr45" event={"ID":"43c9619e-247f-4f17-9ed7-920385ac4560","Type":"ContainerDied","Data":"9e81d76ac039c9a3d90060f4a3cae9a889dea0074390f7a5b3e1ad0b0b69cf45"} Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.548133 4748 scope.go:117] "RemoveContainer" containerID="4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.548152 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wcr45" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.576953 4748 scope.go:117] "RemoveContainer" containerID="66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.614281 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcr45"] Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.629764 4748 scope.go:117] "RemoveContainer" containerID="9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.630290 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wcr45"] Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.690673 4748 scope.go:117] "RemoveContainer" containerID="4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854" Feb 16 15:34:30 crc kubenswrapper[4748]: E0216 15:34:30.691438 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854\": container with ID starting with 4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854 not found: ID does not exist" containerID="4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.691496 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854"} err="failed to get container status \"4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854\": rpc error: code = NotFound desc = could not find container \"4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854\": container with ID starting with 4687ff5f770f76f5305befa8db7df4db3da247eb75b6462d1588c101ac671854 not found: 
ID does not exist" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.691515 4748 scope.go:117] "RemoveContainer" containerID="66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071" Feb 16 15:34:30 crc kubenswrapper[4748]: E0216 15:34:30.691945 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071\": container with ID starting with 66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071 not found: ID does not exist" containerID="66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.691974 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071"} err="failed to get container status \"66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071\": rpc error: code = NotFound desc = could not find container \"66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071\": container with ID starting with 66ff36c524e8e354371781b5ba2445f24961920aafe1adaee02a2dcfd601a071 not found: ID does not exist" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.691991 4748 scope.go:117] "RemoveContainer" containerID="9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c" Feb 16 15:34:30 crc kubenswrapper[4748]: E0216 15:34:30.692285 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c\": container with ID starting with 9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c not found: ID does not exist" containerID="9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c" Feb 16 15:34:30 crc kubenswrapper[4748]: I0216 15:34:30.692312 4748 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c"} err="failed to get container status \"9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c\": rpc error: code = NotFound desc = could not find container \"9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c\": container with ID starting with 9430e988d6b5b4943ab80b8f3e349db7b09fdd4ad3f58425c4f1e491a9c2491c not found: ID does not exist" Feb 16 15:34:31 crc kubenswrapper[4748]: I0216 15:34:31.011638 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" path="/var/lib/kubelet/pods/43c9619e-247f-4f17-9ed7-920385ac4560/volumes" Feb 16 15:34:31 crc kubenswrapper[4748]: E0216 15:34:31.110517 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:34:31 crc kubenswrapper[4748]: E0216 15:34:31.111470 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:34:31 crc kubenswrapper[4748]: E0216 15:34:31.112281 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessPr
obe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:34:31 crc kubenswrapper[4748]: E0216 15:34:31.113818 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:34:35 crc kubenswrapper[4748]: I0216 15:34:35.006787 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:34:35 crc kubenswrapper[4748]: E0216 15:34:35.007391 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:34:41 crc kubenswrapper[4748]: E0216 15:34:41.996882 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:34:45 crc kubenswrapper[4748]: I0216 15:34:45.995832 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:34:45 crc kubenswrapper[4748]: E0216 15:34:45.996689 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:34:52 crc kubenswrapper[4748]: E0216 15:34:52.997911 4748 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:35:00 crc kubenswrapper[4748]: I0216 15:35:00.994639 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:35:00 crc kubenswrapper[4748]: E0216 15:35:00.996963 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:35:07 crc kubenswrapper[4748]: E0216 15:35:07.996691 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:35:11 crc kubenswrapper[4748]: I0216 15:35:11.993915 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:35:11 crc kubenswrapper[4748]: E0216 15:35:11.994644 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" 
podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:35:22 crc kubenswrapper[4748]: E0216 15:35:22.997256 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:35:23 crc kubenswrapper[4748]: I0216 15:35:23.996449 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:35:23 crc kubenswrapper[4748]: E0216 15:35:23.997050 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:35:33 crc kubenswrapper[4748]: E0216 15:35:33.996340 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:35:35 crc kubenswrapper[4748]: I0216 15:35:35.994784 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:35:35 crc kubenswrapper[4748]: E0216 15:35:35.995562 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:35:46 crc kubenswrapper[4748]: I0216 15:35:46.994237 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:35:46 crc kubenswrapper[4748]: E0216 15:35:46.995135 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:35:48 crc kubenswrapper[4748]: E0216 15:35:48.997775 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:35:59 crc kubenswrapper[4748]: I0216 15:35:59.995355 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:35:59 crc kubenswrapper[4748]: E0216 15:35:59.996471 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:35:59 crc 
kubenswrapper[4748]: E0216 15:35:59.996857 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:36:10 crc kubenswrapper[4748]: I0216 15:36:10.994570 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:36:10 crc kubenswrapper[4748]: E0216 15:36:10.995541 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:36:13 crc kubenswrapper[4748]: E0216 15:36:13.998677 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:36:23 crc kubenswrapper[4748]: I0216 15:36:23.995075 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:36:23 crc kubenswrapper[4748]: E0216 15:36:23.995880 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.222115 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-srktp"] Feb 16 15:36:24 crc kubenswrapper[4748]: E0216 15:36:24.223028 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="extract-utilities" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.223052 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="extract-utilities" Feb 16 15:36:24 crc kubenswrapper[4748]: E0216 15:36:24.223087 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="registry-server" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.223096 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="registry-server" Feb 16 15:36:24 crc kubenswrapper[4748]: E0216 15:36:24.223130 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="extract-content" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.223138 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="extract-content" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.223366 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="43c9619e-247f-4f17-9ed7-920385ac4560" containerName="registry-server" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.225123 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.235340 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srktp"] Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.358670 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-catalog-content\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.359079 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-utilities\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.359201 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42jb4\" (UniqueName: \"kubernetes.io/projected/5392ac69-72ed-4e9c-a0a1-31d961263fcc-kube-api-access-42jb4\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.461526 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-catalog-content\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.461741 4748 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-utilities\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.461794 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42jb4\" (UniqueName: \"kubernetes.io/projected/5392ac69-72ed-4e9c-a0a1-31d961263fcc-kube-api-access-42jb4\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.462129 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-catalog-content\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.462206 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-utilities\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.498363 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42jb4\" (UniqueName: \"kubernetes.io/projected/5392ac69-72ed-4e9c-a0a1-31d961263fcc-kube-api-access-42jb4\") pod \"redhat-operators-srktp\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:24 crc kubenswrapper[4748]: I0216 15:36:24.553598 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:25 crc kubenswrapper[4748]: I0216 15:36:25.082305 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-srktp"] Feb 16 15:36:25 crc kubenswrapper[4748]: I0216 15:36:25.834746 4748 generic.go:334] "Generic (PLEG): container finished" podID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerID="0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310" exitCode=0 Feb 16 15:36:25 crc kubenswrapper[4748]: I0216 15:36:25.834812 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srktp" event={"ID":"5392ac69-72ed-4e9c-a0a1-31d961263fcc","Type":"ContainerDied","Data":"0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310"} Feb 16 15:36:25 crc kubenswrapper[4748]: I0216 15:36:25.835104 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srktp" event={"ID":"5392ac69-72ed-4e9c-a0a1-31d961263fcc","Type":"ContainerStarted","Data":"4b2c0f5d0d8bdf61c03625334e7179b52553f28cfae006694b0baa74c89f6725"} Feb 16 15:36:26 crc kubenswrapper[4748]: I0216 15:36:26.846143 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srktp" event={"ID":"5392ac69-72ed-4e9c-a0a1-31d961263fcc","Type":"ContainerStarted","Data":"04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41"} Feb 16 15:36:27 crc kubenswrapper[4748]: I0216 15:36:27.861687 4748 generic.go:334] "Generic (PLEG): container finished" podID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerID="04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41" exitCode=0 Feb 16 15:36:27 crc kubenswrapper[4748]: I0216 15:36:27.861854 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srktp" 
event={"ID":"5392ac69-72ed-4e9c-a0a1-31d961263fcc","Type":"ContainerDied","Data":"04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41"} Feb 16 15:36:27 crc kubenswrapper[4748]: E0216 15:36:27.996481 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:36:28 crc kubenswrapper[4748]: I0216 15:36:28.881170 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srktp" event={"ID":"5392ac69-72ed-4e9c-a0a1-31d961263fcc","Type":"ContainerStarted","Data":"4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f"} Feb 16 15:36:28 crc kubenswrapper[4748]: I0216 15:36:28.911070 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-srktp" podStartSLOduration=2.372233578 podStartE2EDuration="4.911045873s" podCreationTimestamp="2026-02-16 15:36:24 +0000 UTC" firstStartedPulling="2026-02-16 15:36:25.836703467 +0000 UTC m=+2611.528372506" lastFinishedPulling="2026-02-16 15:36:28.375515752 +0000 UTC m=+2614.067184801" observedRunningTime="2026-02-16 15:36:28.900572707 +0000 UTC m=+2614.592241786" watchObservedRunningTime="2026-02-16 15:36:28.911045873 +0000 UTC m=+2614.602714942" Feb 16 15:36:34 crc kubenswrapper[4748]: I0216 15:36:34.554842 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:34 crc kubenswrapper[4748]: I0216 15:36:34.555805 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:35 crc kubenswrapper[4748]: I0216 15:36:35.624016 4748 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-srktp" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="registry-server" probeResult="failure" output=< Feb 16 15:36:35 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:36:35 crc kubenswrapper[4748]: > Feb 16 15:36:35 crc kubenswrapper[4748]: I0216 15:36:35.995448 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:36:35 crc kubenswrapper[4748]: E0216 15:36:35.996006 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:36:40 crc kubenswrapper[4748]: E0216 15:36:40.996919 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:36:44 crc kubenswrapper[4748]: I0216 15:36:44.643605 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:44 crc kubenswrapper[4748]: I0216 15:36:44.720647 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:44 crc kubenswrapper[4748]: I0216 15:36:44.908674 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-srktp"] Feb 16 15:36:46 crc kubenswrapper[4748]: I0216 15:36:46.126039 4748 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-srktp" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="registry-server" containerID="cri-o://4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f" gracePeriod=2 Feb 16 15:36:46 crc kubenswrapper[4748]: I0216 15:36:46.774902 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:46 crc kubenswrapper[4748]: I0216 15:36:46.920771 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42jb4\" (UniqueName: \"kubernetes.io/projected/5392ac69-72ed-4e9c-a0a1-31d961263fcc-kube-api-access-42jb4\") pod \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " Feb 16 15:36:46 crc kubenswrapper[4748]: I0216 15:36:46.920993 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-catalog-content\") pod \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " Feb 16 15:36:46 crc kubenswrapper[4748]: I0216 15:36:46.921090 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-utilities\") pod \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\" (UID: \"5392ac69-72ed-4e9c-a0a1-31d961263fcc\") " Feb 16 15:36:46 crc kubenswrapper[4748]: I0216 15:36:46.922278 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-utilities" (OuterVolumeSpecName: "utilities") pod "5392ac69-72ed-4e9c-a0a1-31d961263fcc" (UID: "5392ac69-72ed-4e9c-a0a1-31d961263fcc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:36:46 crc kubenswrapper[4748]: I0216 15:36:46.926329 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5392ac69-72ed-4e9c-a0a1-31d961263fcc-kube-api-access-42jb4" (OuterVolumeSpecName: "kube-api-access-42jb4") pod "5392ac69-72ed-4e9c-a0a1-31d961263fcc" (UID: "5392ac69-72ed-4e9c-a0a1-31d961263fcc"). InnerVolumeSpecName "kube-api-access-42jb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.023534 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.023561 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42jb4\" (UniqueName: \"kubernetes.io/projected/5392ac69-72ed-4e9c-a0a1-31d961263fcc-kube-api-access-42jb4\") on node \"crc\" DevicePath \"\"" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.086389 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5392ac69-72ed-4e9c-a0a1-31d961263fcc" (UID: "5392ac69-72ed-4e9c-a0a1-31d961263fcc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.128070 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5392ac69-72ed-4e9c-a0a1-31d961263fcc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.142374 4748 generic.go:334] "Generic (PLEG): container finished" podID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerID="4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f" exitCode=0 Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.142421 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-srktp" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.142443 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srktp" event={"ID":"5392ac69-72ed-4e9c-a0a1-31d961263fcc","Type":"ContainerDied","Data":"4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f"} Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.142905 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-srktp" event={"ID":"5392ac69-72ed-4e9c-a0a1-31d961263fcc","Type":"ContainerDied","Data":"4b2c0f5d0d8bdf61c03625334e7179b52553f28cfae006694b0baa74c89f6725"} Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.142928 4748 scope.go:117] "RemoveContainer" containerID="4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.167398 4748 scope.go:117] "RemoveContainer" containerID="04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.196899 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-srktp"] Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 
15:36:47.202945 4748 scope.go:117] "RemoveContainer" containerID="0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.206227 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-srktp"] Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.250983 4748 scope.go:117] "RemoveContainer" containerID="4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f" Feb 16 15:36:47 crc kubenswrapper[4748]: E0216 15:36:47.251301 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f\": container with ID starting with 4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f not found: ID does not exist" containerID="4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.251327 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f"} err="failed to get container status \"4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f\": rpc error: code = NotFound desc = could not find container \"4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f\": container with ID starting with 4e6e8bd5814c29a437fe0fc890879f7925b070b1e46978d54466d0cfb5bb3d5f not found: ID does not exist" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.251350 4748 scope.go:117] "RemoveContainer" containerID="04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41" Feb 16 15:36:47 crc kubenswrapper[4748]: E0216 15:36:47.251565 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41\": container with ID 
starting with 04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41 not found: ID does not exist" containerID="04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.251599 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41"} err="failed to get container status \"04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41\": rpc error: code = NotFound desc = could not find container \"04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41\": container with ID starting with 04f072bc1200212048f1181db11b1fec301c0c42791d91661b43e6f31357de41 not found: ID does not exist" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.251619 4748 scope.go:117] "RemoveContainer" containerID="0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310" Feb 16 15:36:47 crc kubenswrapper[4748]: E0216 15:36:47.251955 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310\": container with ID starting with 0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310 not found: ID does not exist" containerID="0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.251976 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310"} err="failed to get container status \"0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310\": rpc error: code = NotFound desc = could not find container \"0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310\": container with ID starting with 0ab672fbe9badbab31fde81912796368eb2371f870b134d96557b1062d1ae310 not found: 
ID does not exist" Feb 16 15:36:47 crc kubenswrapper[4748]: I0216 15:36:47.995029 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:36:47 crc kubenswrapper[4748]: E0216 15:36:47.995554 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:36:49 crc kubenswrapper[4748]: I0216 15:36:49.022182 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" path="/var/lib/kubelet/pods/5392ac69-72ed-4e9c-a0a1-31d961263fcc/volumes" Feb 16 15:36:52 crc kubenswrapper[4748]: E0216 15:36:52.997666 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:37:02 crc kubenswrapper[4748]: I0216 15:37:02.995324 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:37:02 crc kubenswrapper[4748]: E0216 15:37:02.996880 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" 
podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:37:05 crc kubenswrapper[4748]: E0216 15:37:05.010867 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:37:15 crc kubenswrapper[4748]: I0216 15:37:15.008851 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:37:15 crc kubenswrapper[4748]: E0216 15:37:15.009859 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:37:16 crc kubenswrapper[4748]: E0216 15:37:16.997324 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:37:29 crc kubenswrapper[4748]: I0216 15:37:29.996665 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:37:29 crc kubenswrapper[4748]: E0216 15:37:29.997931 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:37:31 crc kubenswrapper[4748]: E0216 15:37:31.999406 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:37:45 crc kubenswrapper[4748]: I0216 15:37:45.001465 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:37:45 crc kubenswrapper[4748]: I0216 15:37:45.888243 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"dcc28193efc042b0cad62ac5964fcb9ef5a63df6cd09713732e0efa9098a5d5a"} Feb 16 15:37:45 crc kubenswrapper[4748]: E0216 15:37:45.995618 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.294314 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zjkdx"] Feb 16 15:37:58 crc kubenswrapper[4748]: E0216 15:37:58.295636 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="extract-content" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.295660 4748 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="extract-content" Feb 16 15:37:58 crc kubenswrapper[4748]: E0216 15:37:58.295690 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="extract-utilities" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.295704 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="extract-utilities" Feb 16 15:37:58 crc kubenswrapper[4748]: E0216 15:37:58.295771 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="registry-server" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.295784 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="registry-server" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.296125 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="5392ac69-72ed-4e9c-a0a1-31d961263fcc" containerName="registry-server" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.299003 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.314485 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjkdx"] Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.408804 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-catalog-content\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.408868 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-utilities\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.408997 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8xnq\" (UniqueName: \"kubernetes.io/projected/ed4bf73e-6312-40a6-9137-2adbdcc8111c-kube-api-access-f8xnq\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.510673 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-catalog-content\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.510776 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-utilities\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.510885 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8xnq\" (UniqueName: \"kubernetes.io/projected/ed4bf73e-6312-40a6-9137-2adbdcc8111c-kube-api-access-f8xnq\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.511406 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-catalog-content\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.511472 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-utilities\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.532420 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8xnq\" (UniqueName: \"kubernetes.io/projected/ed4bf73e-6312-40a6-9137-2adbdcc8111c-kube-api-access-f8xnq\") pod \"certified-operators-zjkdx\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: I0216 15:37:58.634443 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:37:58 crc kubenswrapper[4748]: E0216 15:37:58.996129 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:37:59 crc kubenswrapper[4748]: I0216 15:37:59.130700 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjkdx"] Feb 16 15:38:00 crc kubenswrapper[4748]: I0216 15:38:00.055313 4748 generic.go:334] "Generic (PLEG): container finished" podID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerID="5dd6729adfca98b0fc07a0ae904fe9f956e252a60a5928f19e7f53168f48be2e" exitCode=0 Feb 16 15:38:00 crc kubenswrapper[4748]: I0216 15:38:00.055384 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjkdx" event={"ID":"ed4bf73e-6312-40a6-9137-2adbdcc8111c","Type":"ContainerDied","Data":"5dd6729adfca98b0fc07a0ae904fe9f956e252a60a5928f19e7f53168f48be2e"} Feb 16 15:38:00 crc kubenswrapper[4748]: I0216 15:38:00.055879 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjkdx" event={"ID":"ed4bf73e-6312-40a6-9137-2adbdcc8111c","Type":"ContainerStarted","Data":"8ffb24924ec2357d4f71c8cacec2292552209f92eea4679391536be92ec056bb"} Feb 16 15:38:01 crc kubenswrapper[4748]: I0216 15:38:01.074076 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjkdx" event={"ID":"ed4bf73e-6312-40a6-9137-2adbdcc8111c","Type":"ContainerStarted","Data":"565ea6e3ecda362f0f9e0803258d12359817c58bda5acbf8ae67004ae0cb327b"} Feb 16 15:38:02 crc kubenswrapper[4748]: I0216 15:38:02.091003 4748 generic.go:334] "Generic (PLEG): container 
finished" podID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerID="565ea6e3ecda362f0f9e0803258d12359817c58bda5acbf8ae67004ae0cb327b" exitCode=0 Feb 16 15:38:02 crc kubenswrapper[4748]: I0216 15:38:02.091127 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjkdx" event={"ID":"ed4bf73e-6312-40a6-9137-2adbdcc8111c","Type":"ContainerDied","Data":"565ea6e3ecda362f0f9e0803258d12359817c58bda5acbf8ae67004ae0cb327b"} Feb 16 15:38:03 crc kubenswrapper[4748]: I0216 15:38:03.101396 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjkdx" event={"ID":"ed4bf73e-6312-40a6-9137-2adbdcc8111c","Type":"ContainerStarted","Data":"72d74b53f5fa253331c7ba2c2180fb3e6ceabf3111c593c873bb1f2144b8ed0a"} Feb 16 15:38:03 crc kubenswrapper[4748]: I0216 15:38:03.133574 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zjkdx" podStartSLOduration=2.6548285270000003 podStartE2EDuration="5.133548845s" podCreationTimestamp="2026-02-16 15:37:58 +0000 UTC" firstStartedPulling="2026-02-16 15:38:00.058293267 +0000 UTC m=+2705.749962306" lastFinishedPulling="2026-02-16 15:38:02.537013575 +0000 UTC m=+2708.228682624" observedRunningTime="2026-02-16 15:38:03.126271738 +0000 UTC m=+2708.817940777" watchObservedRunningTime="2026-02-16 15:38:03.133548845 +0000 UTC m=+2708.825217934" Feb 16 15:38:06 crc kubenswrapper[4748]: I0216 15:38:06.775264 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8v252"] Feb 16 15:38:06 crc kubenswrapper[4748]: I0216 15:38:06.780969 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:06 crc kubenswrapper[4748]: I0216 15:38:06.786048 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8v252"] Feb 16 15:38:06 crc kubenswrapper[4748]: I0216 15:38:06.903264 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-utilities\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:06 crc kubenswrapper[4748]: I0216 15:38:06.903317 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q49fs\" (UniqueName: \"kubernetes.io/projected/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-kube-api-access-q49fs\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:06 crc kubenswrapper[4748]: I0216 15:38:06.903593 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-catalog-content\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.005105 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-utilities\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.005167 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-q49fs\" (UniqueName: \"kubernetes.io/projected/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-kube-api-access-q49fs\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.005307 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-catalog-content\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.005553 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-utilities\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.005893 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-catalog-content\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.027687 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q49fs\" (UniqueName: \"kubernetes.io/projected/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-kube-api-access-q49fs\") pod \"community-operators-8v252\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.104677 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:07 crc kubenswrapper[4748]: I0216 15:38:07.678098 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8v252"] Feb 16 15:38:07 crc kubenswrapper[4748]: W0216 15:38:07.686481 4748 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bed6e8b_1429_4430_b6bd_6a1683f9f78a.slice/crio-810ee5c8a75608f5925d046e19bb592eca40835fc1a220dde4d25b3c88a1c297 WatchSource:0}: Error finding container 810ee5c8a75608f5925d046e19bb592eca40835fc1a220dde4d25b3c88a1c297: Status 404 returned error can't find the container with id 810ee5c8a75608f5925d046e19bb592eca40835fc1a220dde4d25b3c88a1c297 Feb 16 15:38:08 crc kubenswrapper[4748]: I0216 15:38:08.153457 4748 generic.go:334] "Generic (PLEG): container finished" podID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerID="350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c" exitCode=0 Feb 16 15:38:08 crc kubenswrapper[4748]: I0216 15:38:08.153503 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v252" event={"ID":"9bed6e8b-1429-4430-b6bd-6a1683f9f78a","Type":"ContainerDied","Data":"350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c"} Feb 16 15:38:08 crc kubenswrapper[4748]: I0216 15:38:08.153536 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v252" event={"ID":"9bed6e8b-1429-4430-b6bd-6a1683f9f78a","Type":"ContainerStarted","Data":"810ee5c8a75608f5925d046e19bb592eca40835fc1a220dde4d25b3c88a1c297"} Feb 16 15:38:08 crc kubenswrapper[4748]: I0216 15:38:08.634968 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:38:08 crc kubenswrapper[4748]: I0216 15:38:08.635444 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:38:08 crc kubenswrapper[4748]: I0216 15:38:08.730200 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:38:09 crc kubenswrapper[4748]: I0216 15:38:09.265278 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:38:10 crc kubenswrapper[4748]: I0216 15:38:10.180683 4748 generic.go:334] "Generic (PLEG): container finished" podID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerID="cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79" exitCode=0 Feb 16 15:38:10 crc kubenswrapper[4748]: I0216 15:38:10.180750 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v252" event={"ID":"9bed6e8b-1429-4430-b6bd-6a1683f9f78a","Type":"ContainerDied","Data":"cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79"} Feb 16 15:38:11 crc kubenswrapper[4748]: I0216 15:38:11.192922 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v252" event={"ID":"9bed6e8b-1429-4430-b6bd-6a1683f9f78a","Type":"ContainerStarted","Data":"349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5"} Feb 16 15:38:11 crc kubenswrapper[4748]: I0216 15:38:11.216203 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8v252" podStartSLOduration=2.599247899 podStartE2EDuration="5.216186381s" podCreationTimestamp="2026-02-16 15:38:06 +0000 UTC" firstStartedPulling="2026-02-16 15:38:08.157027805 +0000 UTC m=+2713.848696864" lastFinishedPulling="2026-02-16 15:38:10.773966307 +0000 UTC m=+2716.465635346" observedRunningTime="2026-02-16 15:38:11.210565464 +0000 UTC m=+2716.902234503" watchObservedRunningTime="2026-02-16 15:38:11.216186381 +0000 UTC m=+2716.907855420" Feb 16 
15:38:11 crc kubenswrapper[4748]: I0216 15:38:11.925467 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjkdx"] Feb 16 15:38:11 crc kubenswrapper[4748]: I0216 15:38:11.926369 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zjkdx" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="registry-server" containerID="cri-o://72d74b53f5fa253331c7ba2c2180fb3e6ceabf3111c593c873bb1f2144b8ed0a" gracePeriod=2 Feb 16 15:38:11 crc kubenswrapper[4748]: E0216 15:38:11.998012 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.226940 4748 generic.go:334] "Generic (PLEG): container finished" podID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerID="72d74b53f5fa253331c7ba2c2180fb3e6ceabf3111c593c873bb1f2144b8ed0a" exitCode=0 Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.226999 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjkdx" event={"ID":"ed4bf73e-6312-40a6-9137-2adbdcc8111c","Type":"ContainerDied","Data":"72d74b53f5fa253331c7ba2c2180fb3e6ceabf3111c593c873bb1f2144b8ed0a"} Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.595385 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.673651 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-catalog-content\") pod \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.673884 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8xnq\" (UniqueName: \"kubernetes.io/projected/ed4bf73e-6312-40a6-9137-2adbdcc8111c-kube-api-access-f8xnq\") pod \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.674123 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-utilities\") pod \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\" (UID: \"ed4bf73e-6312-40a6-9137-2adbdcc8111c\") " Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.674929 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-utilities" (OuterVolumeSpecName: "utilities") pod "ed4bf73e-6312-40a6-9137-2adbdcc8111c" (UID: "ed4bf73e-6312-40a6-9137-2adbdcc8111c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.680638 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed4bf73e-6312-40a6-9137-2adbdcc8111c-kube-api-access-f8xnq" (OuterVolumeSpecName: "kube-api-access-f8xnq") pod "ed4bf73e-6312-40a6-9137-2adbdcc8111c" (UID: "ed4bf73e-6312-40a6-9137-2adbdcc8111c"). InnerVolumeSpecName "kube-api-access-f8xnq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.720126 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ed4bf73e-6312-40a6-9137-2adbdcc8111c" (UID: "ed4bf73e-6312-40a6-9137-2adbdcc8111c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.776178 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8xnq\" (UniqueName: \"kubernetes.io/projected/ed4bf73e-6312-40a6-9137-2adbdcc8111c-kube-api-access-f8xnq\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.776209 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:12 crc kubenswrapper[4748]: I0216 15:38:12.776218 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ed4bf73e-6312-40a6-9137-2adbdcc8111c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:13 crc kubenswrapper[4748]: I0216 15:38:13.238634 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjkdx" event={"ID":"ed4bf73e-6312-40a6-9137-2adbdcc8111c","Type":"ContainerDied","Data":"8ffb24924ec2357d4f71c8cacec2292552209f92eea4679391536be92ec056bb"} Feb 16 15:38:13 crc kubenswrapper[4748]: I0216 15:38:13.238694 4748 scope.go:117] "RemoveContainer" containerID="72d74b53f5fa253331c7ba2c2180fb3e6ceabf3111c593c873bb1f2144b8ed0a" Feb 16 15:38:13 crc kubenswrapper[4748]: I0216 15:38:13.240216 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjkdx" Feb 16 15:38:13 crc kubenswrapper[4748]: I0216 15:38:13.271912 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjkdx"] Feb 16 15:38:13 crc kubenswrapper[4748]: I0216 15:38:13.279387 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zjkdx"] Feb 16 15:38:13 crc kubenswrapper[4748]: I0216 15:38:13.296567 4748 scope.go:117] "RemoveContainer" containerID="565ea6e3ecda362f0f9e0803258d12359817c58bda5acbf8ae67004ae0cb327b" Feb 16 15:38:13 crc kubenswrapper[4748]: I0216 15:38:13.317061 4748 scope.go:117] "RemoveContainer" containerID="5dd6729adfca98b0fc07a0ae904fe9f956e252a60a5928f19e7f53168f48be2e" Feb 16 15:38:15 crc kubenswrapper[4748]: I0216 15:38:15.014999 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" path="/var/lib/kubelet/pods/ed4bf73e-6312-40a6-9137-2adbdcc8111c/volumes" Feb 16 15:38:17 crc kubenswrapper[4748]: I0216 15:38:17.105042 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:17 crc kubenswrapper[4748]: I0216 15:38:17.105635 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:17 crc kubenswrapper[4748]: I0216 15:38:17.186810 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:17 crc kubenswrapper[4748]: I0216 15:38:17.340026 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:17 crc kubenswrapper[4748]: I0216 15:38:17.512976 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8v252"] Feb 16 15:38:19 crc 
kubenswrapper[4748]: I0216 15:38:19.315708 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8v252" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="registry-server" containerID="cri-o://349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5" gracePeriod=2 Feb 16 15:38:19 crc kubenswrapper[4748]: I0216 15:38:19.878636 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.036770 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q49fs\" (UniqueName: \"kubernetes.io/projected/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-kube-api-access-q49fs\") pod \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.036941 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-utilities\") pod \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.036965 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-catalog-content\") pod \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\" (UID: \"9bed6e8b-1429-4430-b6bd-6a1683f9f78a\") " Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.037967 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-utilities" (OuterVolumeSpecName: "utilities") pod "9bed6e8b-1429-4430-b6bd-6a1683f9f78a" (UID: "9bed6e8b-1429-4430-b6bd-6a1683f9f78a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.049920 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-kube-api-access-q49fs" (OuterVolumeSpecName: "kube-api-access-q49fs") pod "9bed6e8b-1429-4430-b6bd-6a1683f9f78a" (UID: "9bed6e8b-1429-4430-b6bd-6a1683f9f78a"). InnerVolumeSpecName "kube-api-access-q49fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.140799 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q49fs\" (UniqueName: \"kubernetes.io/projected/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-kube-api-access-q49fs\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.140855 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.159235 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9bed6e8b-1429-4430-b6bd-6a1683f9f78a" (UID: "9bed6e8b-1429-4430-b6bd-6a1683f9f78a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.243266 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9bed6e8b-1429-4430-b6bd-6a1683f9f78a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.330166 4748 generic.go:334] "Generic (PLEG): container finished" podID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerID="349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5" exitCode=0 Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.330215 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v252" event={"ID":"9bed6e8b-1429-4430-b6bd-6a1683f9f78a","Type":"ContainerDied","Data":"349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5"} Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.330247 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8v252" event={"ID":"9bed6e8b-1429-4430-b6bd-6a1683f9f78a","Type":"ContainerDied","Data":"810ee5c8a75608f5925d046e19bb592eca40835fc1a220dde4d25b3c88a1c297"} Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.330269 4748 scope.go:117] "RemoveContainer" containerID="349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.330306 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-8v252" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.362682 4748 scope.go:117] "RemoveContainer" containerID="cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.390468 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8v252"] Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.395218 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8v252"] Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.395644 4748 scope.go:117] "RemoveContainer" containerID="350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.434051 4748 scope.go:117] "RemoveContainer" containerID="349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5" Feb 16 15:38:20 crc kubenswrapper[4748]: E0216 15:38:20.434661 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5\": container with ID starting with 349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5 not found: ID does not exist" containerID="349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.434701 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5"} err="failed to get container status \"349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5\": rpc error: code = NotFound desc = could not find container \"349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5\": container with ID starting with 349782b6159dce8cc20c6d0aa77d3329aebf1f6ea585e5da3991a8d4694e0ad5 not 
found: ID does not exist" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.434743 4748 scope.go:117] "RemoveContainer" containerID="cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79" Feb 16 15:38:20 crc kubenswrapper[4748]: E0216 15:38:20.435263 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79\": container with ID starting with cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79 not found: ID does not exist" containerID="cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.435293 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79"} err="failed to get container status \"cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79\": rpc error: code = NotFound desc = could not find container \"cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79\": container with ID starting with cdf238d366539a1e0a60292bf4d4e1f6d6669254db765a9525a6362504353e79 not found: ID does not exist" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.435309 4748 scope.go:117] "RemoveContainer" containerID="350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c" Feb 16 15:38:20 crc kubenswrapper[4748]: E0216 15:38:20.435630 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c\": container with ID starting with 350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c not found: ID does not exist" containerID="350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c" Feb 16 15:38:20 crc kubenswrapper[4748]: I0216 15:38:20.435653 4748 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c"} err="failed to get container status \"350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c\": rpc error: code = NotFound desc = could not find container \"350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c\": container with ID starting with 350833808ba6d87d7e8c5e25291fe676b96064a03aa247a396f61cde3e10177c not found: ID does not exist" Feb 16 15:38:21 crc kubenswrapper[4748]: I0216 15:38:21.014034 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" path="/var/lib/kubelet/pods/9bed6e8b-1429-4430-b6bd-6a1683f9f78a/volumes" Feb 16 15:38:25 crc kubenswrapper[4748]: E0216 15:38:25.007107 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:38:35 crc kubenswrapper[4748]: E0216 15:38:35.998091 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:38:48 crc kubenswrapper[4748]: E0216 15:38:48.997919 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 
15:39:01 crc kubenswrapper[4748]: E0216 15:39:01.998505 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:39:14 crc kubenswrapper[4748]: E0216 15:39:13.999959 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:39:27 crc kubenswrapper[4748]: E0216 15:39:27.996102 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:39:41 crc kubenswrapper[4748]: I0216 15:39:41.996230 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:39:42 crc kubenswrapper[4748]: E0216 15:39:42.128225 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:39:42 crc kubenswrapper[4748]: E0216 15:39:42.128288 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:39:42 crc kubenswrapper[4748]: E0216 15:39:42.128411 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:39:42 crc kubenswrapper[4748]: E0216 15:39:42.129665 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:39:57 crc kubenswrapper[4748]: E0216 15:39:56.999786 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:40:04 crc kubenswrapper[4748]: I0216 15:40:04.729734 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:40:04 crc kubenswrapper[4748]: I0216 15:40:04.730278 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:40:07 crc kubenswrapper[4748]: E0216 15:40:07.997017 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:40:19 crc kubenswrapper[4748]: E0216 15:40:19.996655 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" 
pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:40:31 crc kubenswrapper[4748]: E0216 15:40:31.004500 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:40:34 crc kubenswrapper[4748]: I0216 15:40:34.730375 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:40:34 crc kubenswrapper[4748]: I0216 15:40:34.730887 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:40:43 crc kubenswrapper[4748]: E0216 15:40:43.998463 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:40:57 crc kubenswrapper[4748]: E0216 15:40:57.997305 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" 
podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:41:04 crc kubenswrapper[4748]: I0216 15:41:04.729521 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:41:04 crc kubenswrapper[4748]: I0216 15:41:04.729956 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:41:04 crc kubenswrapper[4748]: I0216 15:41:04.730020 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 15:41:04 crc kubenswrapper[4748]: I0216 15:41:04.731004 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dcc28193efc042b0cad62ac5964fcb9ef5a63df6cd09713732e0efa9098a5d5a"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:41:04 crc kubenswrapper[4748]: I0216 15:41:04.731143 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://dcc28193efc042b0cad62ac5964fcb9ef5a63df6cd09713732e0efa9098a5d5a" gracePeriod=600 Feb 16 15:41:05 crc kubenswrapper[4748]: I0216 15:41:05.185962 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" 
containerID="dcc28193efc042b0cad62ac5964fcb9ef5a63df6cd09713732e0efa9098a5d5a" exitCode=0 Feb 16 15:41:05 crc kubenswrapper[4748]: I0216 15:41:05.186016 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"dcc28193efc042b0cad62ac5964fcb9ef5a63df6cd09713732e0efa9098a5d5a"} Feb 16 15:41:05 crc kubenswrapper[4748]: I0216 15:41:05.186402 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b"} Feb 16 15:41:05 crc kubenswrapper[4748]: I0216 15:41:05.186430 4748 scope.go:117] "RemoveContainer" containerID="e1349b76f062f7606526fc67cc597587f098fc79a6dd244312ad8117a9bd7f98" Feb 16 15:41:11 crc kubenswrapper[4748]: E0216 15:41:11.996625 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:41:22 crc kubenswrapper[4748]: E0216 15:41:22.998671 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:41:37 crc kubenswrapper[4748]: E0216 15:41:37.998583 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:41:49 crc kubenswrapper[4748]: E0216 15:41:49.997747 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:42:00 crc kubenswrapper[4748]: E0216 15:42:00.997245 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:42:12 crc kubenswrapper[4748]: E0216 15:42:12.997862 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.372095 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qfqlr/must-gather-g2lhk"] Feb 16 15:42:21 crc kubenswrapper[4748]: E0216 15:42:21.372849 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="extract-utilities" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.372871 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="extract-utilities" Feb 16 15:42:21 crc kubenswrapper[4748]: 
E0216 15:42:21.372900 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="extract-utilities" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.372908 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="extract-utilities" Feb 16 15:42:21 crc kubenswrapper[4748]: E0216 15:42:21.372926 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="registry-server" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.372933 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="registry-server" Feb 16 15:42:21 crc kubenswrapper[4748]: E0216 15:42:21.372955 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="extract-content" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.372961 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="extract-content" Feb 16 15:42:21 crc kubenswrapper[4748]: E0216 15:42:21.372970 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="registry-server" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.372975 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="registry-server" Feb 16 15:42:21 crc kubenswrapper[4748]: E0216 15:42:21.372985 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="extract-content" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.372991 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="extract-content" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 
15:42:21.373156 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed4bf73e-6312-40a6-9137-2adbdcc8111c" containerName="registry-server" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.373169 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bed6e8b-1429-4430-b6bd-6a1683f9f78a" containerName="registry-server" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.374150 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.380035 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qfqlr"/"openshift-service-ca.crt" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.395517 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qfqlr/must-gather-g2lhk"] Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.398586 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qfqlr"/"kube-root-ca.crt" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.399241 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qfqlr"/"default-dockercfg-fnhkb" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.517015 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/04b82838-0ea9-48cb-9883-fa59c3fe3595-must-gather-output\") pod \"must-gather-g2lhk\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.517367 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc56l\" (UniqueName: \"kubernetes.io/projected/04b82838-0ea9-48cb-9883-fa59c3fe3595-kube-api-access-cc56l\") pod 
\"must-gather-g2lhk\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.619086 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/04b82838-0ea9-48cb-9883-fa59c3fe3595-must-gather-output\") pod \"must-gather-g2lhk\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.619247 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc56l\" (UniqueName: \"kubernetes.io/projected/04b82838-0ea9-48cb-9883-fa59c3fe3595-kube-api-access-cc56l\") pod \"must-gather-g2lhk\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.619541 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/04b82838-0ea9-48cb-9883-fa59c3fe3595-must-gather-output\") pod \"must-gather-g2lhk\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.647216 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc56l\" (UniqueName: \"kubernetes.io/projected/04b82838-0ea9-48cb-9883-fa59c3fe3595-kube-api-access-cc56l\") pod \"must-gather-g2lhk\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:21 crc kubenswrapper[4748]: I0216 15:42:21.691913 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:42:22 crc kubenswrapper[4748]: I0216 15:42:22.233140 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qfqlr/must-gather-g2lhk"] Feb 16 15:42:23 crc kubenswrapper[4748]: I0216 15:42:23.090887 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" event={"ID":"04b82838-0ea9-48cb-9883-fa59c3fe3595","Type":"ContainerStarted","Data":"a27839b3249d5a374bc4f6ef230a4bc784c9f6e338c1dfd3688af999409e650b"} Feb 16 15:42:25 crc kubenswrapper[4748]: E0216 15:42:25.997439 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:42:30 crc kubenswrapper[4748]: I0216 15:42:30.160696 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" event={"ID":"04b82838-0ea9-48cb-9883-fa59c3fe3595","Type":"ContainerStarted","Data":"9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98"} Feb 16 15:42:30 crc kubenswrapper[4748]: I0216 15:42:30.161447 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" event={"ID":"04b82838-0ea9-48cb-9883-fa59c3fe3595","Type":"ContainerStarted","Data":"31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c"} Feb 16 15:42:30 crc kubenswrapper[4748]: I0216 15:42:30.179705 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" podStartSLOduration=2.227179999 podStartE2EDuration="9.179685181s" podCreationTimestamp="2026-02-16 15:42:21 +0000 UTC" firstStartedPulling="2026-02-16 15:42:22.237903715 +0000 UTC 
m=+2967.929572764" lastFinishedPulling="2026-02-16 15:42:29.190408877 +0000 UTC m=+2974.882077946" observedRunningTime="2026-02-16 15:42:30.173983192 +0000 UTC m=+2975.865652261" watchObservedRunningTime="2026-02-16 15:42:30.179685181 +0000 UTC m=+2975.871354230" Feb 16 15:42:34 crc kubenswrapper[4748]: I0216 15:42:34.785628 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qfqlr/crc-debug-65ksn"] Feb 16 15:42:34 crc kubenswrapper[4748]: I0216 15:42:34.788144 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:34 crc kubenswrapper[4748]: I0216 15:42:34.945968 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/585b55f5-6369-4954-99a3-e9259d893803-host\") pod \"crc-debug-65ksn\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:34 crc kubenswrapper[4748]: I0216 15:42:34.946195 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft8jr\" (UniqueName: \"kubernetes.io/projected/585b55f5-6369-4954-99a3-e9259d893803-kube-api-access-ft8jr\") pod \"crc-debug-65ksn\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:35 crc kubenswrapper[4748]: I0216 15:42:35.047893 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft8jr\" (UniqueName: \"kubernetes.io/projected/585b55f5-6369-4954-99a3-e9259d893803-kube-api-access-ft8jr\") pod \"crc-debug-65ksn\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:35 crc kubenswrapper[4748]: I0216 15:42:35.048274 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/585b55f5-6369-4954-99a3-e9259d893803-host\") pod \"crc-debug-65ksn\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:35 crc kubenswrapper[4748]: I0216 15:42:35.048617 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/585b55f5-6369-4954-99a3-e9259d893803-host\") pod \"crc-debug-65ksn\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:35 crc kubenswrapper[4748]: I0216 15:42:35.080688 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft8jr\" (UniqueName: \"kubernetes.io/projected/585b55f5-6369-4954-99a3-e9259d893803-kube-api-access-ft8jr\") pod \"crc-debug-65ksn\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:35 crc kubenswrapper[4748]: I0216 15:42:35.113370 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:42:35 crc kubenswrapper[4748]: I0216 15:42:35.212223 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" event={"ID":"585b55f5-6369-4954-99a3-e9259d893803","Type":"ContainerStarted","Data":"7d8238f733a190db64c2a32a858d1c5edd9435ef51e3ab7330c7233228ea0ba1"} Feb 16 15:42:40 crc kubenswrapper[4748]: E0216 15:42:40.995816 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:42:47 crc kubenswrapper[4748]: I0216 15:42:47.348202 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" event={"ID":"585b55f5-6369-4954-99a3-e9259d893803","Type":"ContainerStarted","Data":"d3c60e8f605745466a404f4909b068c0a82d46fd0a7b104405db3b7feb49ce1a"} Feb 16 15:42:47 crc kubenswrapper[4748]: I0216 15:42:47.368729 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" podStartSLOduration=2.280182898 podStartE2EDuration="13.368705159s" podCreationTimestamp="2026-02-16 15:42:34 +0000 UTC" firstStartedPulling="2026-02-16 15:42:35.194227583 +0000 UTC m=+2980.885896622" lastFinishedPulling="2026-02-16 15:42:46.282749844 +0000 UTC m=+2991.974418883" observedRunningTime="2026-02-16 15:42:47.363326288 +0000 UTC m=+2993.054995327" watchObservedRunningTime="2026-02-16 15:42:47.368705159 +0000 UTC m=+2993.060374198" Feb 16 15:42:54 crc kubenswrapper[4748]: E0216 15:42:54.009731 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:43:01 crc kubenswrapper[4748]: I0216 15:43:01.471201 4748 generic.go:334] "Generic (PLEG): container finished" podID="585b55f5-6369-4954-99a3-e9259d893803" containerID="d3c60e8f605745466a404f4909b068c0a82d46fd0a7b104405db3b7feb49ce1a" exitCode=0 Feb 16 15:43:01 crc kubenswrapper[4748]: I0216 15:43:01.471267 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" event={"ID":"585b55f5-6369-4954-99a3-e9259d893803","Type":"ContainerDied","Data":"d3c60e8f605745466a404f4909b068c0a82d46fd0a7b104405db3b7feb49ce1a"} Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.605480 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.632125 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qfqlr/crc-debug-65ksn"] Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.639828 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qfqlr/crc-debug-65ksn"] Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.692173 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/585b55f5-6369-4954-99a3-e9259d893803-host\") pod \"585b55f5-6369-4954-99a3-e9259d893803\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.692297 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/585b55f5-6369-4954-99a3-e9259d893803-host" (OuterVolumeSpecName: "host") pod "585b55f5-6369-4954-99a3-e9259d893803" (UID: "585b55f5-6369-4954-99a3-e9259d893803"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.692465 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft8jr\" (UniqueName: \"kubernetes.io/projected/585b55f5-6369-4954-99a3-e9259d893803-kube-api-access-ft8jr\") pod \"585b55f5-6369-4954-99a3-e9259d893803\" (UID: \"585b55f5-6369-4954-99a3-e9259d893803\") " Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.692959 4748 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/585b55f5-6369-4954-99a3-e9259d893803-host\") on node \"crc\" DevicePath \"\"" Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.704899 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585b55f5-6369-4954-99a3-e9259d893803-kube-api-access-ft8jr" (OuterVolumeSpecName: "kube-api-access-ft8jr") pod "585b55f5-6369-4954-99a3-e9259d893803" (UID: "585b55f5-6369-4954-99a3-e9259d893803"). InnerVolumeSpecName "kube-api-access-ft8jr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:43:02 crc kubenswrapper[4748]: I0216 15:43:02.795048 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft8jr\" (UniqueName: \"kubernetes.io/projected/585b55f5-6369-4954-99a3-e9259d893803-kube-api-access-ft8jr\") on node \"crc\" DevicePath \"\"" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.012053 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585b55f5-6369-4954-99a3-e9259d893803" path="/var/lib/kubelet/pods/585b55f5-6369-4954-99a3-e9259d893803/volumes" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.513136 4748 scope.go:117] "RemoveContainer" containerID="d3c60e8f605745466a404f4909b068c0a82d46fd0a7b104405db3b7feb49ce1a" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.513402 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-65ksn" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.839939 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qfqlr/crc-debug-h75rq"] Feb 16 15:43:03 crc kubenswrapper[4748]: E0216 15:43:03.840528 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585b55f5-6369-4954-99a3-e9259d893803" containerName="container-00" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.840542 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="585b55f5-6369-4954-99a3-e9259d893803" containerName="container-00" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.840734 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="585b55f5-6369-4954-99a3-e9259d893803" containerName="container-00" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.841389 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.923435 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf2j8\" (UniqueName: \"kubernetes.io/projected/bfac4aaa-4b46-487b-867d-d9068ddf68f4-kube-api-access-lf2j8\") pod \"crc-debug-h75rq\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:03 crc kubenswrapper[4748]: I0216 15:43:03.923497 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfac4aaa-4b46-487b-867d-d9068ddf68f4-host\") pod \"crc-debug-h75rq\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.025431 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lf2j8\" (UniqueName: 
\"kubernetes.io/projected/bfac4aaa-4b46-487b-867d-d9068ddf68f4-kube-api-access-lf2j8\") pod \"crc-debug-h75rq\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.025504 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfac4aaa-4b46-487b-867d-d9068ddf68f4-host\") pod \"crc-debug-h75rq\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.025696 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfac4aaa-4b46-487b-867d-d9068ddf68f4-host\") pod \"crc-debug-h75rq\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.051550 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lf2j8\" (UniqueName: \"kubernetes.io/projected/bfac4aaa-4b46-487b-867d-d9068ddf68f4-kube-api-access-lf2j8\") pod \"crc-debug-h75rq\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.157011 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.523026 4748 generic.go:334] "Generic (PLEG): container finished" podID="bfac4aaa-4b46-487b-867d-d9068ddf68f4" containerID="7205935def2af3857dcd04b3edeb6e2a61e3f58db7b7abaf5aec1ee8833560e6" exitCode=1 Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.523068 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/crc-debug-h75rq" event={"ID":"bfac4aaa-4b46-487b-867d-d9068ddf68f4","Type":"ContainerDied","Data":"7205935def2af3857dcd04b3edeb6e2a61e3f58db7b7abaf5aec1ee8833560e6"} Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.523090 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/crc-debug-h75rq" event={"ID":"bfac4aaa-4b46-487b-867d-d9068ddf68f4","Type":"ContainerStarted","Data":"3bc0857d4c8620bd09c880f02aa02ba8c3855b4a37771487ddc213fd773a9eb2"} Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.557438 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qfqlr/crc-debug-h75rq"] Feb 16 15:43:04 crc kubenswrapper[4748]: I0216 15:43:04.570788 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qfqlr/crc-debug-h75rq"] Feb 16 15:43:05 crc kubenswrapper[4748]: I0216 15:43:05.658007 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:05 crc kubenswrapper[4748]: I0216 15:43:05.763001 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lf2j8\" (UniqueName: \"kubernetes.io/projected/bfac4aaa-4b46-487b-867d-d9068ddf68f4-kube-api-access-lf2j8\") pod \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " Feb 16 15:43:05 crc kubenswrapper[4748]: I0216 15:43:05.763110 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfac4aaa-4b46-487b-867d-d9068ddf68f4-host\") pod \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\" (UID: \"bfac4aaa-4b46-487b-867d-d9068ddf68f4\") " Feb 16 15:43:05 crc kubenswrapper[4748]: I0216 15:43:05.763472 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfac4aaa-4b46-487b-867d-d9068ddf68f4-host" (OuterVolumeSpecName: "host") pod "bfac4aaa-4b46-487b-867d-d9068ddf68f4" (UID: "bfac4aaa-4b46-487b-867d-d9068ddf68f4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 16 15:43:05 crc kubenswrapper[4748]: I0216 15:43:05.776904 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfac4aaa-4b46-487b-867d-d9068ddf68f4-kube-api-access-lf2j8" (OuterVolumeSpecName: "kube-api-access-lf2j8") pod "bfac4aaa-4b46-487b-867d-d9068ddf68f4" (UID: "bfac4aaa-4b46-487b-867d-d9068ddf68f4"). InnerVolumeSpecName "kube-api-access-lf2j8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:43:05 crc kubenswrapper[4748]: I0216 15:43:05.865622 4748 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/bfac4aaa-4b46-487b-867d-d9068ddf68f4-host\") on node \"crc\" DevicePath \"\"" Feb 16 15:43:05 crc kubenswrapper[4748]: I0216 15:43:05.865917 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lf2j8\" (UniqueName: \"kubernetes.io/projected/bfac4aaa-4b46-487b-867d-d9068ddf68f4-kube-api-access-lf2j8\") on node \"crc\" DevicePath \"\"" Feb 16 15:43:05 crc kubenswrapper[4748]: E0216 15:43:05.996333 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:43:06 crc kubenswrapper[4748]: I0216 15:43:06.539912 4748 scope.go:117] "RemoveContainer" containerID="7205935def2af3857dcd04b3edeb6e2a61e3f58db7b7abaf5aec1ee8833560e6" Feb 16 15:43:06 crc kubenswrapper[4748]: I0216 15:43:06.539970 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qfqlr/crc-debug-h75rq" Feb 16 15:43:07 crc kubenswrapper[4748]: I0216 15:43:07.007407 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfac4aaa-4b46-487b-867d-d9068ddf68f4" path="/var/lib/kubelet/pods/bfac4aaa-4b46-487b-867d-d9068ddf68f4/volumes" Feb 16 15:43:19 crc kubenswrapper[4748]: E0216 15:43:19.998158 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:43:31 crc kubenswrapper[4748]: E0216 15:43:30.999900 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:43:34 crc kubenswrapper[4748]: I0216 15:43:34.730227 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:43:34 crc kubenswrapper[4748]: I0216 15:43:34.731255 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:43:45 crc kubenswrapper[4748]: E0216 15:43:45.013986 4748 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:43:56 crc kubenswrapper[4748]: E0216 15:43:56.004691 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:43:58 crc kubenswrapper[4748]: I0216 15:43:58.848425 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c/init-config-reloader/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.014596 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c/alertmanager/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.020789 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c/config-reloader/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.023857 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_ec8e9d13-fbe3-43e2-80bb-e2056ce6c92c/init-config-reloader/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.165947 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-79ffc5c478-xcg56_393346bc-972a-4a9f-847b-9bd0562093f7/barbican-api/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.222446 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-79ffc5c478-xcg56_393346bc-972a-4a9f-847b-9bd0562093f7/barbican-api-log/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.347458 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86d9658dfd-jm4bc_1e72587e-7f6f-433e-a493-41d33cb99182/barbican-keystone-listener/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.352580 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-86d9658dfd-jm4bc_1e72587e-7f6f-433e-a493-41d33cb99182/barbican-keystone-listener-log/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.478793 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-574599c959-m6zm8_7ee0b09b-784e-4ba5-bfb3-4067ec822943/barbican-worker/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.567074 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-574599c959-m6zm8_7ee0b09b-784e-4ba5-bfb3-4067ec822943/barbican-worker-log/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.639901 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_60df5608-339f-4262-8459-eb5359287bd8/ceilometer-central-agent/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.722750 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_60df5608-339f-4262-8459-eb5359287bd8/ceilometer-notification-agent/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.782022 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_60df5608-339f-4262-8459-eb5359287bd8/proxy-httpd/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.834878 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_60df5608-339f-4262-8459-eb5359287bd8/sg-core/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 
15:43:59.924481 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0918b3f4-1fe6-4778-a6bb-ff623ae2bf57/cinder-api/0.log" Feb 16 15:43:59 crc kubenswrapper[4748]: I0216 15:43:59.979695 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_0918b3f4-1fe6-4778-a6bb-ff623ae2bf57/cinder-api-log/0.log" Feb 16 15:44:00 crc kubenswrapper[4748]: I0216 15:44:00.073919 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_99782bb5-c485-441e-9a7e-9225582d84bc/cinder-scheduler/0.log" Feb 16 15:44:00 crc kubenswrapper[4748]: I0216 15:44:00.137637 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_99782bb5-c485-441e-9a7e-9225582d84bc/probe/0.log" Feb 16 15:44:00 crc kubenswrapper[4748]: I0216 15:44:00.342614 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_42d40dab-aebb-44fa-ac5a-9100d1b1fb48/loki-compactor/0.log" Feb 16 15:44:00 crc kubenswrapper[4748]: I0216 15:44:00.558789 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-vwd9b_c624e6e8-c1e8-433c-ad0f-603109d8fa32/loki-distributor/0.log" Feb 16 15:44:00 crc kubenswrapper[4748]: I0216 15:44:00.638648 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-vxq6m_48850a43-a766-44bc-9426-b56c91be16d1/gateway/0.log" Feb 16 15:44:00 crc kubenswrapper[4748]: I0216 15:44:00.770619 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-wj7t8_eded30d6-cdfa-48c2-b298-28242bb952d1/gateway/0.log" Feb 16 15:44:00 crc kubenswrapper[4748]: I0216 15:44:00.856026 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_54cc1946-a258-47ba-9460-d27cae5b2b9f/loki-index-gateway/0.log" Feb 16 15:44:00 crc 
kubenswrapper[4748]: I0216 15:44:00.991815 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_ecd4cdcd-6dc0-4bba-980e-019d6eae5251/loki-ingester/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.218518 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-7lv6s_8748ce40-6f4e-417f-919b-5ce0b40ebf43/loki-querier/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.422588 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-klkqr_fcc01f6d-7536-43f6-bd86-a6eea7443783/loki-query-frontend/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.469571 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-4zqjr_9d8e8156-dfec-42df-bb06-3e31424d2642/init/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.602904 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-4zqjr_9d8e8156-dfec-42df-bb06-3e31424d2642/init/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.631867 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-89c5cd4d5-4zqjr_9d8e8156-dfec-42df-bb06-3e31424d2642/dnsmasq-dns/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.670231 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e42defba-7cb0-4599-bdcb-34df647a38ab/glance-httpd/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.801802 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e42defba-7cb0-4599-bdcb-34df647a38ab/glance-log/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.906928 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_f5918057-501d-4f3a-8d34-759a39e28502/glance-log/0.log" Feb 16 15:44:01 crc kubenswrapper[4748]: I0216 15:44:01.913586 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_f5918057-501d-4f3a-8d34-759a39e28502/glance-httpd/0.log" Feb 16 15:44:02 crc kubenswrapper[4748]: I0216 15:44:02.109697 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-69b5b5cc64-hzmw7_8bbee52e-c08f-417f-a7e9-d7c055c695e7/keystone-api/0.log" Feb 16 15:44:02 crc kubenswrapper[4748]: I0216 15:44:02.150571 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e37e9a18-0d9c-4015-b877-06fc0dc9c908/kube-state-metrics/0.log" Feb 16 15:44:02 crc kubenswrapper[4748]: I0216 15:44:02.427066 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b9448587-thszr_4da3d24c-5be3-45a4-a282-bbbd33f0dad7/neutron-api/0.log" Feb 16 15:44:02 crc kubenswrapper[4748]: I0216 15:44:02.442399 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6b9448587-thszr_4da3d24c-5be3-45a4-a282-bbbd33f0dad7/neutron-httpd/0.log" Feb 16 15:44:02 crc kubenswrapper[4748]: I0216 15:44:02.781053 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee2a1e55-f629-46db-872b-db3f1baee84a/nova-api-log/0.log" Feb 16 15:44:02 crc kubenswrapper[4748]: I0216 15:44:02.868637 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_66d4f738-5922-4ccb-a771-33aeccf2264f/nova-cell0-conductor-conductor/0.log" Feb 16 15:44:02 crc kubenswrapper[4748]: I0216 15:44:02.874595 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_ee2a1e55-f629-46db-872b-db3f1baee84a/nova-api-api/0.log" Feb 16 15:44:03 crc kubenswrapper[4748]: I0216 15:44:03.094931 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-conductor-0_5a6aae8e-3f43-4da4-99a0-6342ae62e9c1/nova-cell1-conductor-conductor/0.log" Feb 16 15:44:03 crc kubenswrapper[4748]: I0216 15:44:03.257608 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_5629100c-e11d-40b2-bb5a-d61200b4d405/nova-cell1-novncproxy-novncproxy/0.log" Feb 16 15:44:03 crc kubenswrapper[4748]: I0216 15:44:03.381660 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6490c748-242c-465e-a5ba-c44b9276c005/nova-metadata-log/0.log" Feb 16 15:44:03 crc kubenswrapper[4748]: I0216 15:44:03.667805 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_380f83b1-e55a-4bb1-ab5b-f25ad2f0f1f6/nova-scheduler-scheduler/0.log" Feb 16 15:44:03 crc kubenswrapper[4748]: I0216 15:44:03.705834 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ccc28d79-7cdc-4fac-95bb-2f041b1f25f1/mysql-bootstrap/0.log" Feb 16 15:44:03 crc kubenswrapper[4748]: I0216 15:44:03.888820 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ccc28d79-7cdc-4fac-95bb-2f041b1f25f1/mysql-bootstrap/0.log" Feb 16 15:44:03 crc kubenswrapper[4748]: I0216 15:44:03.941875 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ccc28d79-7cdc-4fac-95bb-2f041b1f25f1/galera/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.164043 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_6490c748-242c-465e-a5ba-c44b9276c005/nova-metadata-metadata/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.254504 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_712fd752-5464-47c2-851e-b5b54a2cf335/mysql-bootstrap/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.438205 4748 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_openstackclient_7fc31461-7669-46dc-ab65-839d0dc6b753/openstackclient/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.450290 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_712fd752-5464-47c2-851e-b5b54a2cf335/galera/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.479132 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_712fd752-5464-47c2-851e-b5b54a2cf335/mysql-bootstrap/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.688310 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-8q6bn_5282d6ba-c0a4-4ada-9ffb-d233444b10f1/ovn-controller/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.709000 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6nflw_5d1f139d-79dd-4b6a-beb6-aa10fe0e91f2/openstack-network-exporter/0.log" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.730218 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.730265 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:44:04 crc kubenswrapper[4748]: I0216 15:44:04.894023 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rhspv_62617783-e02a-4d59-b7a1-36206106585b/ovsdb-server-init/0.log" Feb 16 15:44:05 crc 
kubenswrapper[4748]: I0216 15:44:05.133026 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rhspv_62617783-e02a-4d59-b7a1-36206106585b/ovsdb-server-init/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.136904 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rhspv_62617783-e02a-4d59-b7a1-36206106585b/ovsdb-server/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.148632 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rhspv_62617783-e02a-4d59-b7a1-36206106585b/ovs-vswitchd/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.375293 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_62eb48ad-9b6f-4da0-befd-f14a9e32e031/openstack-network-exporter/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.406014 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_62eb48ad-9b6f-4da0-befd-f14a9e32e031/ovn-northd/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.499469 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_252aec5a-72dd-4699-b9b8-72dc1c8bd1a8/openstack-network-exporter/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.579281 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_252aec5a-72dd-4699-b9b8-72dc1c8bd1a8/ovsdbserver-nb/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.762981 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e891de95-67f1-4cdd-8913-747978f44a1e/openstack-network-exporter/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 15:44:05.774626 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_e891de95-67f1-4cdd-8913-747978f44a1e/ovsdbserver-sb/0.log" Feb 16 15:44:05 crc kubenswrapper[4748]: I0216 
15:44:05.847152 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b69f6f9cb-8v6bm_fd67ae0f-8630-4868-8f11-1dd56d66d7a5/placement-api/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.015533 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3bc07b5c-b3d3-4ba1-b580-30e09261edab/init-config-reloader/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.083480 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6b69f6f9cb-8v6bm_fd67ae0f-8630-4868-8f11-1dd56d66d7a5/placement-log/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.250521 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3bc07b5c-b3d3-4ba1-b580-30e09261edab/config-reloader/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.275674 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3bc07b5c-b3d3-4ba1-b580-30e09261edab/init-config-reloader/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.295094 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3bc07b5c-b3d3-4ba1-b580-30e09261edab/thanos-sidecar/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.299442 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3bc07b5c-b3d3-4ba1-b580-30e09261edab/prometheus/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.438600 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b081f805-b462-406b-9d37-5aef68dd9edc/setup-container/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.703774 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b081f805-b462-406b-9d37-5aef68dd9edc/rabbitmq/0.log" Feb 16 15:44:06 crc 
kubenswrapper[4748]: I0216 15:44:06.726206 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da81a287-d981-4b30-8d23-70cbc085368e/setup-container/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.733064 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_b081f805-b462-406b-9d37-5aef68dd9edc/setup-container/0.log" Feb 16 15:44:06 crc kubenswrapper[4748]: I0216 15:44:06.983568 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da81a287-d981-4b30-8d23-70cbc085368e/setup-container/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.000547 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_da81a287-d981-4b30-8d23-70cbc085368e/rabbitmq/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.200182 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-588bd888d5-jbdss_395a5c55-9892-4842-bf7b-ba42077818d3/proxy-httpd/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.392553 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-588bd888d5-jbdss_395a5c55-9892-4842-bf7b-ba42077818d3/proxy-server/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.457114 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-tpchh_87853597-3b96-46e9-803b-ce992b010f0b/swift-ring-rebalance/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.596055 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/account-auditor/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.606507 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/account-reaper/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 
15:44:07.750001 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/account-replicator/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.834242 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/account-server/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.848445 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/container-auditor/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.883255 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/container-replicator/0.log" Feb 16 15:44:07 crc kubenswrapper[4748]: I0216 15:44:07.953411 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/container-server/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.068965 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/object-expirer/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.113872 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/container-updater/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.114078 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/object-auditor/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.156152 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/object-replicator/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.246321 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/object-server/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.284812 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/rsync/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.368020 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/object-updater/0.log" Feb 16 15:44:08 crc kubenswrapper[4748]: I0216 15:44:08.370927 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3c460294-3cc7-4770-9a8a-0bd7c2b8fad2/swift-recon-cron/0.log" Feb 16 15:44:09 crc kubenswrapper[4748]: E0216 15:44:09.004447 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:44:10 crc kubenswrapper[4748]: I0216 15:44:10.647379 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_23bef88b-c878-46e0-960b-f77594421c27/memcached/0.log" Feb 16 15:44:23 crc kubenswrapper[4748]: E0216 15:44:23.997012 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:44:33 crc kubenswrapper[4748]: I0216 15:44:33.966240 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9_2927c174-6e02-4529-802c-3bf02d82855f/util/0.log" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.225164 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9_2927c174-6e02-4529-802c-3bf02d82855f/pull/0.log" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.230759 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9_2927c174-6e02-4529-802c-3bf02d82855f/util/0.log" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.249095 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9_2927c174-6e02-4529-802c-3bf02d82855f/pull/0.log" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.460297 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9_2927c174-6e02-4529-802c-3bf02d82855f/util/0.log" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.484488 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9_2927c174-6e02-4529-802c-3bf02d82855f/pull/0.log" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.498482 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_02aaacdbb2cdc34212ef0d4f992a08d2443727e2a4312d7c57a1078608v2qp9_2927c174-6e02-4529-802c-3bf02d82855f/extract/0.log" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.728881 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.728951 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.728997 4748 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.731657 4748 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b"} pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.731802 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" containerID="cri-o://508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" gracePeriod=600 Feb 16 15:44:34 crc kubenswrapper[4748]: E0216 15:44:34.855700 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" 
podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:44:34 crc kubenswrapper[4748]: I0216 15:44:34.997348 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d8bf5c495-64bzs_d7e9b369-11d8-4aa9-a3b2-db6b88904b51/manager/0.log" Feb 16 15:44:35 crc kubenswrapper[4748]: I0216 15:44:35.386601 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987464f4-7vgh9_69a86f03-7f6c-48b7-bc6f-c6c432f735ce/manager/0.log" Feb 16 15:44:35 crc kubenswrapper[4748]: I0216 15:44:35.396916 4748 generic.go:334] "Generic (PLEG): container finished" podID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" exitCode=0 Feb 16 15:44:35 crc kubenswrapper[4748]: I0216 15:44:35.396960 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerDied","Data":"508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b"} Feb 16 15:44:35 crc kubenswrapper[4748]: I0216 15:44:35.396990 4748 scope.go:117] "RemoveContainer" containerID="dcc28193efc042b0cad62ac5964fcb9ef5a63df6cd09713732e0efa9098a5d5a" Feb 16 15:44:35 crc kubenswrapper[4748]: I0216 15:44:35.398244 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:44:35 crc kubenswrapper[4748]: E0216 15:44:35.398558 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 
15:44:35 crc kubenswrapper[4748]: I0216 15:44:35.581891 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69f49c598c-xf72s_8bdd4b0f-f5b2-4ac5-8221-8b7ff7264325/manager/0.log" Feb 16 15:44:35 crc kubenswrapper[4748]: I0216 15:44:35.947252 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5b9b8895d5-2lgdc_6c462cae-e6f6-4551-a63f-783b5355050d/manager/0.log" Feb 16 15:44:36 crc kubenswrapper[4748]: I0216 15:44:36.067583 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-5d946d989d-rmvdg_c8282c68-cc06-4252-be3f-12fd375413d5/manager/0.log" Feb 16 15:44:36 crc kubenswrapper[4748]: I0216 15:44:36.336356 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-554564d7fc-jvjnc_6b8c8de2-3f25-4adb-9598-3beceb5aab8f/manager/0.log" Feb 16 15:44:36 crc kubenswrapper[4748]: I0216 15:44:36.396053 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79d975b745-szbcv_47a580d2-e511-4827-bc01-91189c1e34e9/manager/0.log" Feb 16 15:44:36 crc kubenswrapper[4748]: I0216 15:44:36.615861 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b4d948c87-ss7f5_08a4c7e1-1e32-4f6e-8fdc-d622dbe06059/manager/0.log" Feb 16 15:44:36 crc kubenswrapper[4748]: I0216 15:44:36.732075 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-54f6768c69-tlr6k_aadfc2ec-ea6d-440c-9c0d-d5005e39230c/manager/0.log" Feb 16 15:44:36 crc kubenswrapper[4748]: I0216 15:44:36.852990 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6994f66f48-594mx_f39955d7-4055-4a9d-8c21-eafa5ddd3f7f/manager/0.log" Feb 16 15:44:36 crc kubenswrapper[4748]: E0216 15:44:36.998564 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:44:37 crc kubenswrapper[4748]: I0216 15:44:37.171357 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64ddbf8bb-z478p_2e85817e-216a-4784-880a-f433c52032af/manager/0.log" Feb 16 15:44:37 crc kubenswrapper[4748]: I0216 15:44:37.451301 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-567668f5cf-fk48d_e7c3328c-8c35-4dab-8082-d7ee6d6c53f5/manager/0.log" Feb 16 15:44:37 crc kubenswrapper[4748]: I0216 15:44:37.684619 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cmsl4n_58388a3c-6479-40b7-a5cb-4d83fc2a38b3/manager/0.log" Feb 16 15:44:38 crc kubenswrapper[4748]: I0216 15:44:38.121235 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-787c798d66-8b6lx_60c7e70b-728c-4a23-9bdf-801548ee7c98/operator/0.log" Feb 16 15:44:38 crc kubenswrapper[4748]: I0216 15:44:38.369362 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-msk6t_391e6019-7bc8-4e9d-bfff-c5be8b646c53/registry-server/0.log" Feb 16 15:44:38 crc kubenswrapper[4748]: I0216 15:44:38.633207 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-d44cf6b75-wjnt6_77cc0c29-605d-46d3-98a8-f9aeecbe888b/manager/0.log" Feb 16 15:44:38 crc kubenswrapper[4748]: I0216 15:44:38.842670 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-8497b45c89-bzrm7_5e9f0b4c-6645-4cc6-ad91-043721d84e74/manager/0.log" Feb 16 15:44:39 crc kubenswrapper[4748]: I0216 15:44:39.016426 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-69f8888797-99tg5_a6eb6394-3349-4a90-bf7a-6677191f0c5a/manager/0.log" Feb 16 15:44:39 crc kubenswrapper[4748]: I0216 15:44:39.075691 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-zw2pr_585bdba9-5fef-469b-a5a2-8b4a15719360/operator/0.log" Feb 16 15:44:39 crc kubenswrapper[4748]: I0216 15:44:39.213688 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5b45b684f5-xhrt2_b0e6e37b-e4a2-4013-9678-7412c55e0fd0/manager/0.log" Feb 16 15:44:39 crc kubenswrapper[4748]: I0216 15:44:39.497075 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68f46476f-q95f2_ada66d46-4901-45cc-9b08-a3578fadfda0/manager/0.log" Feb 16 15:44:39 crc kubenswrapper[4748]: I0216 15:44:39.672309 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7866795846-pktjs_26e5a91e-b0ec-44ff-bcb7-edebf76310ce/manager/0.log" Feb 16 15:44:39 crc kubenswrapper[4748]: I0216 15:44:39.862630 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5db88f68c-8g4n4_240e408e-e2ef-4375-a604-f5b29fc5bdfc/manager/0.log" Feb 16 15:44:39 crc kubenswrapper[4748]: I0216 15:44:39.999707 4748 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-6ccb9b958b-7h287_5a824236-f0aa-4b02-8357-0c8275fa6509/manager/0.log" Feb 16 15:44:41 crc kubenswrapper[4748]: I0216 15:44:41.592332 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-868647ff47-zlwvz_383f552e-0d7a-4c2e-8931-1e0605d309e2/manager/0.log" Feb 16 15:44:49 crc kubenswrapper[4748]: I0216 15:44:49.995063 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:44:49 crc kubenswrapper[4748]: E0216 15:44:49.996082 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:44:51 crc kubenswrapper[4748]: I0216 15:44:51.997756 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:44:52 crc kubenswrapper[4748]: E0216 15:44:52.125598 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:44:52 crc kubenswrapper[4748]: E0216 15:44:52.125941 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:44:52 crc kubenswrapper[4748]: E0216 15:44:52.126070 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/
var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:44:52 crc kubenswrapper[4748]: E0216 15:44:52.127298 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.167505 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk"] Feb 16 15:45:00 crc kubenswrapper[4748]: E0216 15:45:00.168519 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfac4aaa-4b46-487b-867d-d9068ddf68f4" containerName="container-00" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.168536 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfac4aaa-4b46-487b-867d-d9068ddf68f4" containerName="container-00" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.168824 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfac4aaa-4b46-487b-867d-d9068ddf68f4" containerName="container-00" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.169710 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.171647 4748 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.172946 4748 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.180651 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk"] Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.272829 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42tjj\" (UniqueName: \"kubernetes.io/projected/24944b75-6c19-49b0-82d2-10a07ad2e585-kube-api-access-42tjj\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.272963 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24944b75-6c19-49b0-82d2-10a07ad2e585-secret-volume\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.273091 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24944b75-6c19-49b0-82d2-10a07ad2e585-config-volume\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.375246 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24944b75-6c19-49b0-82d2-10a07ad2e585-config-volume\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.375351 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42tjj\" (UniqueName: \"kubernetes.io/projected/24944b75-6c19-49b0-82d2-10a07ad2e585-kube-api-access-42tjj\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.375394 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24944b75-6c19-49b0-82d2-10a07ad2e585-secret-volume\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.376846 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24944b75-6c19-49b0-82d2-10a07ad2e585-config-volume\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.381384 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/24944b75-6c19-49b0-82d2-10a07ad2e585-secret-volume\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.405284 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42tjj\" (UniqueName: \"kubernetes.io/projected/24944b75-6c19-49b0-82d2-10a07ad2e585-kube-api-access-42tjj\") pod \"collect-profiles-29520945-tlbgk\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.501600 4748 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:00 crc kubenswrapper[4748]: I0216 15:45:00.989544 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk"] Feb 16 15:45:01 crc kubenswrapper[4748]: I0216 15:45:01.670293 4748 generic.go:334] "Generic (PLEG): container finished" podID="24944b75-6c19-49b0-82d2-10a07ad2e585" containerID="81379f2fb7b18f5f48d983068ca9d0fe1d7fe011c73307026ae25b6975fb9896" exitCode=0 Feb 16 15:45:01 crc kubenswrapper[4748]: I0216 15:45:01.670353 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" event={"ID":"24944b75-6c19-49b0-82d2-10a07ad2e585","Type":"ContainerDied","Data":"81379f2fb7b18f5f48d983068ca9d0fe1d7fe011c73307026ae25b6975fb9896"} Feb 16 15:45:01 crc kubenswrapper[4748]: I0216 15:45:01.670591 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" 
event={"ID":"24944b75-6c19-49b0-82d2-10a07ad2e585","Type":"ContainerStarted","Data":"a71e3549ab9ae393e8022e0e2027d92a8da29ad0d247662edaf1408509f49852"} Feb 16 15:45:02 crc kubenswrapper[4748]: I0216 15:45:02.109033 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-t29rq_d5c74e42-6760-4bae-9ef6-0d4d4c1b5c44/control-plane-machine-set-operator/0.log" Feb 16 15:45:02 crc kubenswrapper[4748]: I0216 15:45:02.280200 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z47nh_263bd0a0-5043-48f7-a185-30ef874fc6e7/machine-api-operator/0.log" Feb 16 15:45:02 crc kubenswrapper[4748]: I0216 15:45:02.302093 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-z47nh_263bd0a0-5043-48f7-a185-30ef874fc6e7/kube-rbac-proxy/0.log" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.087989 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.132171 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42tjj\" (UniqueName: \"kubernetes.io/projected/24944b75-6c19-49b0-82d2-10a07ad2e585-kube-api-access-42tjj\") pod \"24944b75-6c19-49b0-82d2-10a07ad2e585\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.132238 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24944b75-6c19-49b0-82d2-10a07ad2e585-config-volume\") pod \"24944b75-6c19-49b0-82d2-10a07ad2e585\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.132502 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24944b75-6c19-49b0-82d2-10a07ad2e585-secret-volume\") pod \"24944b75-6c19-49b0-82d2-10a07ad2e585\" (UID: \"24944b75-6c19-49b0-82d2-10a07ad2e585\") " Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.133149 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24944b75-6c19-49b0-82d2-10a07ad2e585-config-volume" (OuterVolumeSpecName: "config-volume") pod "24944b75-6c19-49b0-82d2-10a07ad2e585" (UID: "24944b75-6c19-49b0-82d2-10a07ad2e585"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.133627 4748 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24944b75-6c19-49b0-82d2-10a07ad2e585-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.140782 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24944b75-6c19-49b0-82d2-10a07ad2e585-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "24944b75-6c19-49b0-82d2-10a07ad2e585" (UID: "24944b75-6c19-49b0-82d2-10a07ad2e585"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.140987 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24944b75-6c19-49b0-82d2-10a07ad2e585-kube-api-access-42tjj" (OuterVolumeSpecName: "kube-api-access-42tjj") pod "24944b75-6c19-49b0-82d2-10a07ad2e585" (UID: "24944b75-6c19-49b0-82d2-10a07ad2e585"). InnerVolumeSpecName "kube-api-access-42tjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.235880 4748 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/24944b75-6c19-49b0-82d2-10a07ad2e585-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.235923 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42tjj\" (UniqueName: \"kubernetes.io/projected/24944b75-6c19-49b0-82d2-10a07ad2e585-kube-api-access-42tjj\") on node \"crc\" DevicePath \"\"" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.690585 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" event={"ID":"24944b75-6c19-49b0-82d2-10a07ad2e585","Type":"ContainerDied","Data":"a71e3549ab9ae393e8022e0e2027d92a8da29ad0d247662edaf1408509f49852"} Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.690636 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a71e3549ab9ae393e8022e0e2027d92a8da29ad0d247662edaf1408509f49852" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.690702 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520945-tlbgk" Feb 16 15:45:03 crc kubenswrapper[4748]: I0216 15:45:03.995210 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:45:03 crc kubenswrapper[4748]: E0216 15:45:03.995627 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:45:04 crc kubenswrapper[4748]: I0216 15:45:04.178917 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8"] Feb 16 15:45:04 crc kubenswrapper[4748]: I0216 15:45:04.189890 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520900-x7gz8"] Feb 16 15:45:05 crc kubenswrapper[4748]: I0216 15:45:05.010591 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb70c99c-1f99-4c49-86da-0b2102f52ea3" path="/var/lib/kubelet/pods/eb70c99c-1f99-4c49-86da-0b2102f52ea3/volumes" Feb 16 15:45:05 crc kubenswrapper[4748]: E0216 15:45:05.995787 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:45:15 crc kubenswrapper[4748]: I0216 15:45:14.999957 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 
15:45:15 crc kubenswrapper[4748]: E0216 15:45:15.000782 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:45:15 crc kubenswrapper[4748]: I0216 15:45:15.961264 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-rgv2z_2a31ae96-7bd2-4b03-b7a4-5a9065c73b2d/cert-manager-controller/0.log" Feb 16 15:45:16 crc kubenswrapper[4748]: I0216 15:45:16.198602 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-v7pmk_6f321d96-89f7-4e8e-986d-0c1e236a48f3/cert-manager-cainjector/0.log" Feb 16 15:45:16 crc kubenswrapper[4748]: I0216 15:45:16.226159 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-98mzg_dac776e8-29fc-42fc-ad44-d4f0f16b8ef4/cert-manager-webhook/0.log" Feb 16 15:45:17 crc kubenswrapper[4748]: E0216 15:45:17.996224 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:45:22 crc kubenswrapper[4748]: I0216 15:45:22.031859 4748 scope.go:117] "RemoveContainer" containerID="6cb6888f97f2d383da1c31f1d03810f068c44e35e47c021e5ffecadc892a82b4" Feb 16 15:45:25 crc kubenswrapper[4748]: I0216 15:45:25.995633 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:45:25 crc 
kubenswrapper[4748]: E0216 15:45:25.998690 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:45:29 crc kubenswrapper[4748]: E0216 15:45:29.000394 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:45:31 crc kubenswrapper[4748]: I0216 15:45:31.381636 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5c78fc5d65-2j5kv_7d0947f4-e6d9-46c9-b7d1-2dc2c788d855/nmstate-console-plugin/0.log" Feb 16 15:45:31 crc kubenswrapper[4748]: I0216 15:45:31.556598 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-lbl2g_b22e9707-ef85-4f3f-81aa-bd2a419a4a28/nmstate-handler/0.log" Feb 16 15:45:31 crc kubenswrapper[4748]: I0216 15:45:31.633487 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-bh57n_ba9f32c6-5016-44c6-a2e9-1f8424f92e0a/kube-rbac-proxy/0.log" Feb 16 15:45:31 crc kubenswrapper[4748]: I0216 15:45:31.665301 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58c85c668d-bh57n_ba9f32c6-5016-44c6-a2e9-1f8424f92e0a/nmstate-metrics/0.log" Feb 16 15:45:31 crc kubenswrapper[4748]: I0216 15:45:31.794909 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-694c9596b7-bbfnc_120dfea6-1405-433f-bf1f-11903bc821e8/nmstate-operator/0.log" Feb 16 15:45:31 crc kubenswrapper[4748]: I0216 15:45:31.872861 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-866bcb46dc-cll59_65a2e9f8-c446-48be-a887-4da74a413a77/nmstate-webhook/0.log" Feb 16 15:45:38 crc kubenswrapper[4748]: I0216 15:45:38.997107 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:45:38 crc kubenswrapper[4748]: E0216 15:45:38.997702 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:45:41 crc kubenswrapper[4748]: E0216 15:45:41.997269 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:45:45 crc kubenswrapper[4748]: I0216 15:45:45.798670 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-668f94f855-4cdp6_0ef4556c-a65b-4be7-9b8e-36f4423e84b1/kube-rbac-proxy/0.log" Feb 16 15:45:45 crc kubenswrapper[4748]: I0216 15:45:45.854341 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-668f94f855-4cdp6_0ef4556c-a65b-4be7-9b8e-36f4423e84b1/manager/0.log" Feb 16 15:45:51 crc 
kubenswrapper[4748]: I0216 15:45:51.994604 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:45:51 crc kubenswrapper[4748]: E0216 15:45:51.995683 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.166584 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4hqk5"] Feb 16 15:45:55 crc kubenswrapper[4748]: E0216 15:45:55.167828 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24944b75-6c19-49b0-82d2-10a07ad2e585" containerName="collect-profiles" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.167872 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="24944b75-6c19-49b0-82d2-10a07ad2e585" containerName="collect-profiles" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.168222 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="24944b75-6c19-49b0-82d2-10a07ad2e585" containerName="collect-profiles" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.170987 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.222924 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hqk5"] Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.301358 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-catalog-content\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.301858 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxtpp\" (UniqueName: \"kubernetes.io/projected/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-kube-api-access-kxtpp\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.301968 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-utilities\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.404228 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-catalog-content\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.404357 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kxtpp\" (UniqueName: \"kubernetes.io/projected/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-kube-api-access-kxtpp\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.404386 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-utilities\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.404854 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-catalog-content\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.404866 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-utilities\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.422512 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxtpp\" (UniqueName: \"kubernetes.io/projected/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-kube-api-access-kxtpp\") pod \"redhat-marketplace-4hqk5\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: I0216 15:45:55.517250 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:45:55 crc kubenswrapper[4748]: E0216 15:45:55.996608 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:45:56 crc kubenswrapper[4748]: I0216 15:45:56.192642 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hqk5"] Feb 16 15:45:56 crc kubenswrapper[4748]: I0216 15:45:56.223495 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hqk5" event={"ID":"e7fddec0-5f36-4ac6-b2e2-023a16992cd1","Type":"ContainerStarted","Data":"52adadd95530aef2192fbb7299376eba332e75d2cca36cac4eb17fa7a09f74f1"} Feb 16 15:45:57 crc kubenswrapper[4748]: I0216 15:45:57.240521 4748 generic.go:334] "Generic (PLEG): container finished" podID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerID="0fdb72b0eee175cc6a4c3c4969555eca29e6615b7a561fe91eb0f85c465fc9de" exitCode=0 Feb 16 15:45:57 crc kubenswrapper[4748]: I0216 15:45:57.240608 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hqk5" event={"ID":"e7fddec0-5f36-4ac6-b2e2-023a16992cd1","Type":"ContainerDied","Data":"0fdb72b0eee175cc6a4c3c4969555eca29e6615b7a561fe91eb0f85c465fc9de"} Feb 16 15:45:58 crc kubenswrapper[4748]: I0216 15:45:58.258530 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hqk5" event={"ID":"e7fddec0-5f36-4ac6-b2e2-023a16992cd1","Type":"ContainerStarted","Data":"de3d1fa664c539c29ea3984a16ffdb7d1421d8de37c8c793cf08b90803bcde0f"} Feb 16 15:45:59 crc kubenswrapper[4748]: I0216 15:45:59.269148 4748 generic.go:334] "Generic (PLEG): container finished" 
podID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerID="de3d1fa664c539c29ea3984a16ffdb7d1421d8de37c8c793cf08b90803bcde0f" exitCode=0 Feb 16 15:45:59 crc kubenswrapper[4748]: I0216 15:45:59.269248 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hqk5" event={"ID":"e7fddec0-5f36-4ac6-b2e2-023a16992cd1","Type":"ContainerDied","Data":"de3d1fa664c539c29ea3984a16ffdb7d1421d8de37c8c793cf08b90803bcde0f"} Feb 16 15:45:59 crc kubenswrapper[4748]: I0216 15:45:59.484031 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6scp6_f16a6a7b-f861-465e-bfc7-6c94642de504/prometheus-operator/0.log" Feb 16 15:45:59 crc kubenswrapper[4748]: I0216 15:45:59.638273 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7_a879c65c-71d1-4772-b4cc-6d30cbc5210f/prometheus-operator-admission-webhook/0.log" Feb 16 15:45:59 crc kubenswrapper[4748]: I0216 15:45:59.713705 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt_e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c/prometheus-operator-admission-webhook/0.log" Feb 16 15:45:59 crc kubenswrapper[4748]: I0216 15:45:59.845072 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6x2nh_9060b85c-a31e-4caa-9552-e2d2a4e7cba5/operator/0.log" Feb 16 15:45:59 crc kubenswrapper[4748]: I0216 15:45:59.948691 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9vb7j_c3019993-87d0-4427-a928-fa01e0a0f419/perses-operator/0.log" Feb 16 15:46:00 crc kubenswrapper[4748]: I0216 15:46:00.278566 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hqk5" 
event={"ID":"e7fddec0-5f36-4ac6-b2e2-023a16992cd1","Type":"ContainerStarted","Data":"655c4ff4bbcc33ae0ff93f62664e3fb587ae50eb0727f0d57af609403f6d9fbe"} Feb 16 15:46:00 crc kubenswrapper[4748]: I0216 15:46:00.302846 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4hqk5" podStartSLOduration=2.913812776 podStartE2EDuration="5.302826615s" podCreationTimestamp="2026-02-16 15:45:55 +0000 UTC" firstStartedPulling="2026-02-16 15:45:57.244547691 +0000 UTC m=+3182.936216780" lastFinishedPulling="2026-02-16 15:45:59.63356158 +0000 UTC m=+3185.325230619" observedRunningTime="2026-02-16 15:46:00.295077696 +0000 UTC m=+3185.986746735" watchObservedRunningTime="2026-02-16 15:46:00.302826615 +0000 UTC m=+3185.994495654" Feb 16 15:46:05 crc kubenswrapper[4748]: I0216 15:46:05.517446 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:46:05 crc kubenswrapper[4748]: I0216 15:46:05.518181 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:46:05 crc kubenswrapper[4748]: I0216 15:46:05.596462 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:46:05 crc kubenswrapper[4748]: I0216 15:46:05.996947 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:46:05 crc kubenswrapper[4748]: E0216 15:46:05.998236 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" 
podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:46:06 crc kubenswrapper[4748]: I0216 15:46:06.423369 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:46:08 crc kubenswrapper[4748]: I0216 15:46:08.946100 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hqk5"] Feb 16 15:46:08 crc kubenswrapper[4748]: I0216 15:46:08.947914 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4hqk5" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="registry-server" containerID="cri-o://655c4ff4bbcc33ae0ff93f62664e3fb587ae50eb0727f0d57af609403f6d9fbe" gracePeriod=2 Feb 16 15:46:08 crc kubenswrapper[4748]: E0216 15:46:08.999454 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:46:09 crc kubenswrapper[4748]: I0216 15:46:09.600502 4748 generic.go:334] "Generic (PLEG): container finished" podID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerID="655c4ff4bbcc33ae0ff93f62664e3fb587ae50eb0727f0d57af609403f6d9fbe" exitCode=0 Feb 16 15:46:09 crc kubenswrapper[4748]: I0216 15:46:09.600632 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hqk5" event={"ID":"e7fddec0-5f36-4ac6-b2e2-023a16992cd1","Type":"ContainerDied","Data":"655c4ff4bbcc33ae0ff93f62664e3fb587ae50eb0727f0d57af609403f6d9fbe"} Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.026343 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.184431 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-utilities\") pod \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.184541 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxtpp\" (UniqueName: \"kubernetes.io/projected/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-kube-api-access-kxtpp\") pod \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.184692 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-catalog-content\") pod \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\" (UID: \"e7fddec0-5f36-4ac6-b2e2-023a16992cd1\") " Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.185538 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-utilities" (OuterVolumeSpecName: "utilities") pod "e7fddec0-5f36-4ac6-b2e2-023a16992cd1" (UID: "e7fddec0-5f36-4ac6-b2e2-023a16992cd1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.190986 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-kube-api-access-kxtpp" (OuterVolumeSpecName: "kube-api-access-kxtpp") pod "e7fddec0-5f36-4ac6-b2e2-023a16992cd1" (UID: "e7fddec0-5f36-4ac6-b2e2-023a16992cd1"). InnerVolumeSpecName "kube-api-access-kxtpp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.197176 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.197243 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxtpp\" (UniqueName: \"kubernetes.io/projected/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-kube-api-access-kxtpp\") on node \"crc\" DevicePath \"\"" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.227812 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7fddec0-5f36-4ac6-b2e2-023a16992cd1" (UID: "e7fddec0-5f36-4ac6-b2e2-023a16992cd1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.299921 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7fddec0-5f36-4ac6-b2e2-023a16992cd1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.618620 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4hqk5" event={"ID":"e7fddec0-5f36-4ac6-b2e2-023a16992cd1","Type":"ContainerDied","Data":"52adadd95530aef2192fbb7299376eba332e75d2cca36cac4eb17fa7a09f74f1"} Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.618668 4748 scope.go:117] "RemoveContainer" containerID="655c4ff4bbcc33ae0ff93f62664e3fb587ae50eb0727f0d57af609403f6d9fbe" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.618904 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4hqk5" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.659195 4748 scope.go:117] "RemoveContainer" containerID="de3d1fa664c539c29ea3984a16ffdb7d1421d8de37c8c793cf08b90803bcde0f" Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.674506 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hqk5"] Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.684868 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4hqk5"] Feb 16 15:46:10 crc kubenswrapper[4748]: I0216 15:46:10.699468 4748 scope.go:117] "RemoveContainer" containerID="0fdb72b0eee175cc6a4c3c4969555eca29e6615b7a561fe91eb0f85c465fc9de" Feb 16 15:46:11 crc kubenswrapper[4748]: I0216 15:46:11.006198 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" path="/var/lib/kubelet/pods/e7fddec0-5f36-4ac6-b2e2-023a16992cd1/volumes" Feb 16 15:46:14 crc kubenswrapper[4748]: I0216 15:46:14.529883 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-djpzh_28714bd3-7a7d-449e-a2c7-281461dabdb7/kube-rbac-proxy/0.log" Feb 16 15:46:14 crc kubenswrapper[4748]: I0216 15:46:14.535498 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-69bbfbf88f-djpzh_28714bd3-7a7d-449e-a2c7-281461dabdb7/controller/0.log" Feb 16 15:46:14 crc kubenswrapper[4748]: I0216 15:46:14.726120 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-frr-files/0.log" Feb 16 15:46:14 crc kubenswrapper[4748]: I0216 15:46:14.909591 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-frr-files/0.log" Feb 16 15:46:14 crc kubenswrapper[4748]: I0216 15:46:14.919275 4748 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-metrics/0.log" Feb 16 15:46:14 crc kubenswrapper[4748]: I0216 15:46:14.929982 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-reloader/0.log" Feb 16 15:46:14 crc kubenswrapper[4748]: I0216 15:46:14.952026 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-reloader/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.136577 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-frr-files/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.144187 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-metrics/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.144731 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-reloader/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.150564 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-metrics/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.331176 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-reloader/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.357256 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-metrics/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.366934 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/cp-frr-files/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.397223 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/controller/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.526825 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/frr-metrics/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.575681 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/kube-rbac-proxy/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.599036 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/kube-rbac-proxy-frr/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.729208 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/reloader/0.log" Feb 16 15:46:15 crc kubenswrapper[4748]: I0216 15:46:15.884338 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-78b44bf5bb-vb4cq_645081e9-4d4b-4ec4-aca0-0d65484e18fc/frr-k8s-webhook-server/0.log" Feb 16 15:46:16 crc kubenswrapper[4748]: I0216 15:46:16.084207 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5685848bcc-n64g2_a1e7c11c-e2d3-4941-b9ce-15b587b46798/manager/0.log" Feb 16 15:46:16 crc kubenswrapper[4748]: I0216 15:46:16.161188 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-b4f87669f-wmdxb_b3037912-4b1a-4bca-978a-eb9e28269c5e/webhook-server/0.log" Feb 16 15:46:16 crc kubenswrapper[4748]: I0216 15:46:16.361836 4748 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7mpk_d63d4158-b599-4058-a9c5-e31d1125c0bc/kube-rbac-proxy/0.log" Feb 16 15:46:16 crc kubenswrapper[4748]: I0216 15:46:16.408263 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-t6n42_0248528a-4dfd-4dcd-ab5d-e99c2f989f81/frr/0.log" Feb 16 15:46:16 crc kubenswrapper[4748]: I0216 15:46:16.727686 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g7mpk_d63d4158-b599-4058-a9c5-e31d1125c0bc/speaker/0.log" Feb 16 15:46:16 crc kubenswrapper[4748]: I0216 15:46:16.994071 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:46:16 crc kubenswrapper[4748]: E0216 15:46:16.994509 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:46:22 crc kubenswrapper[4748]: E0216 15:46:22.996657 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:46:29 crc kubenswrapper[4748]: I0216 15:46:29.994828 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:46:29 crc kubenswrapper[4748]: E0216 15:46:29.995584 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.131067 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw_bb0800a2-3982-4128-adfe-ac7e8700e11d/util/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.300162 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw_bb0800a2-3982-4128-adfe-ac7e8700e11d/util/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.301434 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw_bb0800a2-3982-4128-adfe-ac7e8700e11d/pull/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.305489 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw_bb0800a2-3982-4128-adfe-ac7e8700e11d/pull/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.475377 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw_bb0800a2-3982-4128-adfe-ac7e8700e11d/util/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.479534 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw_bb0800a2-3982-4128-adfe-ac7e8700e11d/pull/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.518947 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651r8jnw_bb0800a2-3982-4128-adfe-ac7e8700e11d/extract/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.685415 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc_ea460b26-21e3-40f4-a7bb-377fbc91eb7c/util/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.828859 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc_ea460b26-21e3-40f4-a7bb-377fbc91eb7c/util/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.838163 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc_ea460b26-21e3-40f4-a7bb-377fbc91eb7c/pull/0.log" Feb 16 15:46:30 crc kubenswrapper[4748]: I0216 15:46:30.896971 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc_ea460b26-21e3-40f4-a7bb-377fbc91eb7c/pull/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.031555 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc_ea460b26-21e3-40f4-a7bb-377fbc91eb7c/pull/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.036357 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc_ea460b26-21e3-40f4-a7bb-377fbc91eb7c/util/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.055670 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08z85tc_ea460b26-21e3-40f4-a7bb-377fbc91eb7c/extract/0.log" Feb 
16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.199482 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt_d15f1018-7687-4413-a41e-cca3126fa988/util/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.392883 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt_d15f1018-7687-4413-a41e-cca3126fa988/util/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.400129 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt_d15f1018-7687-4413-a41e-cca3126fa988/pull/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.454855 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt_d15f1018-7687-4413-a41e-cca3126fa988/pull/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.634418 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt_d15f1018-7687-4413-a41e-cca3126fa988/extract/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.638320 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt_d15f1018-7687-4413-a41e-cca3126fa988/util/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.663120 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_a9b3ed1fe9273b725119dcfb777257f08e39bbefccdf592dce2d0dc213d7hwt_d15f1018-7687-4413-a41e-cca3126fa988/pull/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.770614 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-s5dp2_dfb12c0d-7fe4-443b-84ef-a50362156745/extract-utilities/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.963339 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5dp2_dfb12c0d-7fe4-443b-84ef-a50362156745/extract-content/0.log" Feb 16 15:46:31 crc kubenswrapper[4748]: I0216 15:46:31.981076 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5dp2_dfb12c0d-7fe4-443b-84ef-a50362156745/extract-utilities/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.020853 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5dp2_dfb12c0d-7fe4-443b-84ef-a50362156745/extract-content/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.159026 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5dp2_dfb12c0d-7fe4-443b-84ef-a50362156745/extract-content/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.165581 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5dp2_dfb12c0d-7fe4-443b-84ef-a50362156745/extract-utilities/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.341079 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wk4zf_be0ce93d-b322-42ac-b2c1-798c2155c41d/extract-utilities/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.552031 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wk4zf_be0ce93d-b322-42ac-b2c1-798c2155c41d/extract-content/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.582576 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-wk4zf_be0ce93d-b322-42ac-b2c1-798c2155c41d/extract-utilities/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.639033 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-s5dp2_dfb12c0d-7fe4-443b-84ef-a50362156745/registry-server/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.652953 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wk4zf_be0ce93d-b322-42ac-b2c1-798c2155c41d/extract-content/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.805170 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wk4zf_be0ce93d-b322-42ac-b2c1-798c2155c41d/extract-utilities/0.log" Feb 16 15:46:32 crc kubenswrapper[4748]: I0216 15:46:32.864275 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wk4zf_be0ce93d-b322-42ac-b2c1-798c2155c41d/extract-content/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.068474 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26_77b55606-9c38-4ca2-8192-d8845aa50a7e/util/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.260782 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26_77b55606-9c38-4ca2-8192-d8845aa50a7e/pull/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.311044 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26_77b55606-9c38-4ca2-8192-d8845aa50a7e/pull/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.313897 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26_77b55606-9c38-4ca2-8192-d8845aa50a7e/util/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.520434 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wk4zf_be0ce93d-b322-42ac-b2c1-798c2155c41d/registry-server/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.539735 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26_77b55606-9c38-4ca2-8192-d8845aa50a7e/pull/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.563999 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26_77b55606-9c38-4ca2-8192-d8845aa50a7e/extract/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.595950 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f938df2ce267491f058ea7e3036e97ee3f65bf3665185b1a4f52323ecawtw26_77b55606-9c38-4ca2-8192-d8845aa50a7e/util/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.811960 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6nb8t_379d499a-4ed6-4e79-ae35-e934ca28ab85/marketplace-operator/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.824901 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-67snq_99a6afe6-d571-490a-b304-1e8727a3b41c/extract-utilities/0.log" Feb 16 15:46:33 crc kubenswrapper[4748]: I0216 15:46:33.983708 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-67snq_99a6afe6-d571-490a-b304-1e8727a3b41c/extract-utilities/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.024202 4748 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-67snq_99a6afe6-d571-490a-b304-1e8727a3b41c/extract-content/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.034338 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-67snq_99a6afe6-d571-490a-b304-1e8727a3b41c/extract-content/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.158527 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-67snq_99a6afe6-d571-490a-b304-1e8727a3b41c/extract-utilities/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.165396 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-67snq_99a6afe6-d571-490a-b304-1e8727a3b41c/extract-content/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.264325 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wcmxf_635b0481-4777-4121-aac7-e967c93db3fe/extract-utilities/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.273001 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-67snq_99a6afe6-d571-490a-b304-1e8727a3b41c/registry-server/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.400906 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wcmxf_635b0481-4777-4121-aac7-e967c93db3fe/extract-content/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.402875 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wcmxf_635b0481-4777-4121-aac7-e967c93db3fe/extract-content/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.434133 4748 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-wcmxf_635b0481-4777-4121-aac7-e967c93db3fe/extract-utilities/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.571088 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wcmxf_635b0481-4777-4121-aac7-e967c93db3fe/extract-content/0.log" Feb 16 15:46:34 crc kubenswrapper[4748]: I0216 15:46:34.599865 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wcmxf_635b0481-4777-4121-aac7-e967c93db3fe/extract-utilities/0.log" Feb 16 15:46:35 crc kubenswrapper[4748]: I0216 15:46:35.092445 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-wcmxf_635b0481-4777-4121-aac7-e967c93db3fe/registry-server/0.log" Feb 16 15:46:35 crc kubenswrapper[4748]: E0216 15:46:35.996870 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:46:41 crc kubenswrapper[4748]: I0216 15:46:41.995249 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:46:41 crc kubenswrapper[4748]: E0216 15:46:41.996001 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:46:47 crc kubenswrapper[4748]: I0216 15:46:47.692118 4748 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d95cd54c6-8ltc7_a879c65c-71d1-4772-b4cc-6d30cbc5210f/prometheus-operator-admission-webhook/0.log" Feb 16 15:46:47 crc kubenswrapper[4748]: I0216 15:46:47.722805 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-d95cd54c6-lkqmt_e3c1d3ac-7949-4827-8cd5-e0ac8f8d280c/prometheus-operator-admission-webhook/0.log" Feb 16 15:46:47 crc kubenswrapper[4748]: I0216 15:46:47.792652 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-6scp6_f16a6a7b-f861-465e-bfc7-6c94642de504/prometheus-operator/0.log" Feb 16 15:46:47 crc kubenswrapper[4748]: I0216 15:46:47.910624 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-9vb7j_c3019993-87d0-4427-a928-fa01e0a0f419/perses-operator/0.log" Feb 16 15:46:47 crc kubenswrapper[4748]: I0216 15:46:47.957638 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-6x2nh_9060b85c-a31e-4caa-9552-e2d2a4e7cba5/operator/0.log" Feb 16 15:46:48 crc kubenswrapper[4748]: E0216 15:46:48.998238 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:46:56 crc kubenswrapper[4748]: I0216 15:46:56.994791 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:46:56 crc kubenswrapper[4748]: E0216 15:46:56.995955 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:47:01 crc kubenswrapper[4748]: I0216 15:47:01.087243 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-668f94f855-4cdp6_0ef4556c-a65b-4be7-9b8e-36f4423e84b1/kube-rbac-proxy/0.log" Feb 16 15:47:01 crc kubenswrapper[4748]: I0216 15:47:01.159325 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-668f94f855-4cdp6_0ef4556c-a65b-4be7-9b8e-36f4423e84b1/manager/0.log" Feb 16 15:47:03 crc kubenswrapper[4748]: E0216 15:47:03.997961 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:47:09 crc kubenswrapper[4748]: I0216 15:47:09.994942 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:47:09 crc kubenswrapper[4748]: E0216 15:47:09.995636 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:47:16 crc kubenswrapper[4748]: E0216 15:47:16.995705 4748 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:47:22 crc kubenswrapper[4748]: I0216 15:47:22.994864 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:47:22 crc kubenswrapper[4748]: E0216 15:47:22.995633 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:47:27 crc kubenswrapper[4748]: E0216 15:47:27.998101 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:47:33 crc kubenswrapper[4748]: I0216 15:47:33.995108 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:47:33 crc kubenswrapper[4748]: E0216 15:47:33.996097 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" 
podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.825871 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k52dx"] Feb 16 15:47:38 crc kubenswrapper[4748]: E0216 15:47:38.827113 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="extract-utilities" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.827136 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="extract-utilities" Feb 16 15:47:38 crc kubenswrapper[4748]: E0216 15:47:38.827159 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="extract-content" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.827172 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="extract-content" Feb 16 15:47:38 crc kubenswrapper[4748]: E0216 15:47:38.827221 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="registry-server" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.827235 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="registry-server" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.827590 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fddec0-5f36-4ac6-b2e2-023a16992cd1" containerName="registry-server" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.830428 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.841827 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k52dx"] Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.938499 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-kube-api-access-fvd2k\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.938574 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-catalog-content\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:38 crc kubenswrapper[4748]: I0216 15:47:38.939377 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-utilities\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.041837 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-utilities\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.041914 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-kube-api-access-fvd2k\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.041952 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-catalog-content\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.042560 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-catalog-content\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.042700 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-utilities\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.072915 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-kube-api-access-fvd2k\") pod \"redhat-operators-k52dx\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.159077 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:39 crc kubenswrapper[4748]: I0216 15:47:39.680535 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k52dx"] Feb 16 15:47:40 crc kubenswrapper[4748]: I0216 15:47:40.508920 4748 generic.go:334] "Generic (PLEG): container finished" podID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerID="c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0" exitCode=0 Feb 16 15:47:40 crc kubenswrapper[4748]: I0216 15:47:40.508969 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k52dx" event={"ID":"5a18aca8-c8d5-4cd1-a6f2-620318f726c3","Type":"ContainerDied","Data":"c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0"} Feb 16 15:47:40 crc kubenswrapper[4748]: I0216 15:47:40.508996 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k52dx" event={"ID":"5a18aca8-c8d5-4cd1-a6f2-620318f726c3","Type":"ContainerStarted","Data":"d799149a87c4cf88728380ecd3759e29c8a7b9c3b820f3707ef6f47192f185f1"} Feb 16 15:47:41 crc kubenswrapper[4748]: I0216 15:47:41.525105 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k52dx" event={"ID":"5a18aca8-c8d5-4cd1-a6f2-620318f726c3","Type":"ContainerStarted","Data":"20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c"} Feb 16 15:47:42 crc kubenswrapper[4748]: I0216 15:47:42.538258 4748 generic.go:334] "Generic (PLEG): container finished" podID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerID="20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c" exitCode=0 Feb 16 15:47:42 crc kubenswrapper[4748]: I0216 15:47:42.538467 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k52dx" 
event={"ID":"5a18aca8-c8d5-4cd1-a6f2-620318f726c3","Type":"ContainerDied","Data":"20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c"} Feb 16 15:47:42 crc kubenswrapper[4748]: E0216 15:47:42.998386 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:47:43 crc kubenswrapper[4748]: I0216 15:47:43.561657 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k52dx" event={"ID":"5a18aca8-c8d5-4cd1-a6f2-620318f726c3","Type":"ContainerStarted","Data":"e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803"} Feb 16 15:47:43 crc kubenswrapper[4748]: I0216 15:47:43.604792 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k52dx" podStartSLOduration=3.176638737 podStartE2EDuration="5.604764407s" podCreationTimestamp="2026-02-16 15:47:38 +0000 UTC" firstStartedPulling="2026-02-16 15:47:40.511071005 +0000 UTC m=+3286.202740054" lastFinishedPulling="2026-02-16 15:47:42.939196645 +0000 UTC m=+3288.630865724" observedRunningTime="2026-02-16 15:47:43.591782156 +0000 UTC m=+3289.283451265" watchObservedRunningTime="2026-02-16 15:47:43.604764407 +0000 UTC m=+3289.296433476" Feb 16 15:47:47 crc kubenswrapper[4748]: I0216 15:47:47.994930 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:47:47 crc kubenswrapper[4748]: E0216 15:47:47.996063 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:47:49 crc kubenswrapper[4748]: I0216 15:47:49.159394 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:49 crc kubenswrapper[4748]: I0216 15:47:49.159697 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:50 crc kubenswrapper[4748]: I0216 15:47:50.242513 4748 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k52dx" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="registry-server" probeResult="failure" output=< Feb 16 15:47:50 crc kubenswrapper[4748]: timeout: failed to connect service ":50051" within 1s Feb 16 15:47:50 crc kubenswrapper[4748]: > Feb 16 15:47:57 crc kubenswrapper[4748]: E0216 15:47:57.996906 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:47:59 crc kubenswrapper[4748]: I0216 15:47:59.228613 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:59 crc kubenswrapper[4748]: I0216 15:47:59.302098 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:47:59 crc kubenswrapper[4748]: I0216 15:47:59.482239 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k52dx"] Feb 16 15:48:00 crc 
kubenswrapper[4748]: I0216 15:48:00.762805 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k52dx" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="registry-server" containerID="cri-o://e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803" gracePeriod=2 Feb 16 15:48:00 crc kubenswrapper[4748]: I0216 15:48:00.995920 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:48:00 crc kubenswrapper[4748]: E0216 15:48:00.998858 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.347295 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.495749 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-catalog-content\") pod \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.495926 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-kube-api-access-fvd2k\") pod \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.495971 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-utilities\") pod \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\" (UID: \"5a18aca8-c8d5-4cd1-a6f2-620318f726c3\") " Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.496757 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-utilities" (OuterVolumeSpecName: "utilities") pod "5a18aca8-c8d5-4cd1-a6f2-620318f726c3" (UID: "5a18aca8-c8d5-4cd1-a6f2-620318f726c3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.497266 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.502229 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-kube-api-access-fvd2k" (OuterVolumeSpecName: "kube-api-access-fvd2k") pod "5a18aca8-c8d5-4cd1-a6f2-620318f726c3" (UID: "5a18aca8-c8d5-4cd1-a6f2-620318f726c3"). InnerVolumeSpecName "kube-api-access-fvd2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.600070 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvd2k\" (UniqueName: \"kubernetes.io/projected/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-kube-api-access-fvd2k\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.607232 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a18aca8-c8d5-4cd1-a6f2-620318f726c3" (UID: "5a18aca8-c8d5-4cd1-a6f2-620318f726c3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.702406 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a18aca8-c8d5-4cd1-a6f2-620318f726c3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.777020 4748 generic.go:334] "Generic (PLEG): container finished" podID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerID="e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803" exitCode=0 Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.777099 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k52dx" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.777109 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k52dx" event={"ID":"5a18aca8-c8d5-4cd1-a6f2-620318f726c3","Type":"ContainerDied","Data":"e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803"} Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.777618 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k52dx" event={"ID":"5a18aca8-c8d5-4cd1-a6f2-620318f726c3","Type":"ContainerDied","Data":"d799149a87c4cf88728380ecd3759e29c8a7b9c3b820f3707ef6f47192f185f1"} Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.777671 4748 scope.go:117] "RemoveContainer" containerID="e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.840926 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k52dx"] Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.842097 4748 scope.go:117] "RemoveContainer" containerID="20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 
15:48:01.854449 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k52dx"] Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.885339 4748 scope.go:117] "RemoveContainer" containerID="c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.928871 4748 scope.go:117] "RemoveContainer" containerID="e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803" Feb 16 15:48:01 crc kubenswrapper[4748]: E0216 15:48:01.929216 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803\": container with ID starting with e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803 not found: ID does not exist" containerID="e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.929245 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803"} err="failed to get container status \"e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803\": rpc error: code = NotFound desc = could not find container \"e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803\": container with ID starting with e22394742420ca4ee8dfed17dbb9bcf88dfa5b9b4a7376edd9abd7a85f703803 not found: ID does not exist" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.929264 4748 scope.go:117] "RemoveContainer" containerID="20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c" Feb 16 15:48:01 crc kubenswrapper[4748]: E0216 15:48:01.929754 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c\": container with ID 
starting with 20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c not found: ID does not exist" containerID="20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.929804 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c"} err="failed to get container status \"20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c\": rpc error: code = NotFound desc = could not find container \"20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c\": container with ID starting with 20e872a32717c9f065c8010fca365cb6fbd7be88bc6a4094ee1faf77d7b27d4c not found: ID does not exist" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.929841 4748 scope.go:117] "RemoveContainer" containerID="c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0" Feb 16 15:48:01 crc kubenswrapper[4748]: E0216 15:48:01.930167 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0\": container with ID starting with c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0 not found: ID does not exist" containerID="c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0" Feb 16 15:48:01 crc kubenswrapper[4748]: I0216 15:48:01.930193 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0"} err="failed to get container status \"c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0\": rpc error: code = NotFound desc = could not find container \"c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0\": container with ID starting with c3fe1300e4cd68ba0264a3b3a6ffcbe1bda8b56d843cb5390ad9c7f03b50f9e0 not found: 
ID does not exist" Feb 16 15:48:03 crc kubenswrapper[4748]: I0216 15:48:03.015748 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" path="/var/lib/kubelet/pods/5a18aca8-c8d5-4cd1-a6f2-620318f726c3/volumes" Feb 16 15:48:09 crc kubenswrapper[4748]: E0216 15:48:09.027126 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:48:11 crc kubenswrapper[4748]: I0216 15:48:11.994422 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:48:11 crc kubenswrapper[4748]: E0216 15:48:11.995285 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:48:21 crc kubenswrapper[4748]: E0216 15:48:21.998057 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:48:25 crc kubenswrapper[4748]: I0216 15:48:25.995137 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:48:25 crc kubenswrapper[4748]: E0216 15:48:25.996192 4748 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:48:28 crc kubenswrapper[4748]: I0216 15:48:28.080354 4748 generic.go:334] "Generic (PLEG): container finished" podID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerID="31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c" exitCode=0 Feb 16 15:48:28 crc kubenswrapper[4748]: I0216 15:48:28.080427 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" event={"ID":"04b82838-0ea9-48cb-9883-fa59c3fe3595","Type":"ContainerDied","Data":"31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c"} Feb 16 15:48:28 crc kubenswrapper[4748]: I0216 15:48:28.081539 4748 scope.go:117] "RemoveContainer" containerID="31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c" Feb 16 15:48:28 crc kubenswrapper[4748]: I0216 15:48:28.976015 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qfqlr_must-gather-g2lhk_04b82838-0ea9-48cb-9883-fa59c3fe3595/gather/0.log" Feb 16 15:48:36 crc kubenswrapper[4748]: I0216 15:48:36.995237 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:48:36 crc kubenswrapper[4748]: E0216 15:48:36.996165 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:48:36 crc kubenswrapper[4748]: E0216 15:48:36.996671 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.380182 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qfqlr/must-gather-g2lhk"] Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.380877 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerName="copy" containerID="cri-o://9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98" gracePeriod=2 Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.393254 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qfqlr/must-gather-g2lhk"] Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.869727 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qfqlr_must-gather-g2lhk_04b82838-0ea9-48cb-9883-fa59c3fe3595/copy/0.log" Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.870369 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.922188 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc56l\" (UniqueName: \"kubernetes.io/projected/04b82838-0ea9-48cb-9883-fa59c3fe3595-kube-api-access-cc56l\") pod \"04b82838-0ea9-48cb-9883-fa59c3fe3595\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.922360 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/04b82838-0ea9-48cb-9883-fa59c3fe3595-must-gather-output\") pod \"04b82838-0ea9-48cb-9883-fa59c3fe3595\" (UID: \"04b82838-0ea9-48cb-9883-fa59c3fe3595\") " Feb 16 15:48:37 crc kubenswrapper[4748]: I0216 15:48:37.928983 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04b82838-0ea9-48cb-9883-fa59c3fe3595-kube-api-access-cc56l" (OuterVolumeSpecName: "kube-api-access-cc56l") pod "04b82838-0ea9-48cb-9883-fa59c3fe3595" (UID: "04b82838-0ea9-48cb-9883-fa59c3fe3595"). InnerVolumeSpecName "kube-api-access-cc56l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.025302 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc56l\" (UniqueName: \"kubernetes.io/projected/04b82838-0ea9-48cb-9883-fa59c3fe3595-kube-api-access-cc56l\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.073609 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04b82838-0ea9-48cb-9883-fa59c3fe3595-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "04b82838-0ea9-48cb-9883-fa59c3fe3595" (UID: "04b82838-0ea9-48cb-9883-fa59c3fe3595"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.127579 4748 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/04b82838-0ea9-48cb-9883-fa59c3fe3595-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.179363 4748 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qfqlr_must-gather-g2lhk_04b82838-0ea9-48cb-9883-fa59c3fe3595/copy/0.log" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.179886 4748 generic.go:334] "Generic (PLEG): container finished" podID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerID="9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98" exitCode=143 Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.179960 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qfqlr/must-gather-g2lhk" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.179968 4748 scope.go:117] "RemoveContainer" containerID="9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.218600 4748 scope.go:117] "RemoveContainer" containerID="31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.275094 4748 scope.go:117] "RemoveContainer" containerID="9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98" Feb 16 15:48:38 crc kubenswrapper[4748]: E0216 15:48:38.275602 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98\": container with ID starting with 9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98 not found: ID does not exist" 
containerID="9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.275662 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98"} err="failed to get container status \"9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98\": rpc error: code = NotFound desc = could not find container \"9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98\": container with ID starting with 9afb94f25d92b104ad312f16aea3b82372af0a50aec4ab6be3ff674a36bfaa98 not found: ID does not exist" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.275689 4748 scope.go:117] "RemoveContainer" containerID="31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c" Feb 16 15:48:38 crc kubenswrapper[4748]: E0216 15:48:38.276104 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c\": container with ID starting with 31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c not found: ID does not exist" containerID="31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c" Feb 16 15:48:38 crc kubenswrapper[4748]: I0216 15:48:38.276161 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c"} err="failed to get container status \"31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c\": rpc error: code = NotFound desc = could not find container \"31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c\": container with ID starting with 31a581b9575ac71dd6413db344f9c3f0d8b0ff4758ac910027cb6140de8a426c not found: ID does not exist" Feb 16 15:48:39 crc kubenswrapper[4748]: I0216 15:48:39.009593 4748 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" path="/var/lib/kubelet/pods/04b82838-0ea9-48cb-9883-fa59c3fe3595/volumes" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.977398 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5vqww"] Feb 16 15:48:43 crc kubenswrapper[4748]: E0216 15:48:43.978299 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="extract-utilities" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978315 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="extract-utilities" Feb 16 15:48:43 crc kubenswrapper[4748]: E0216 15:48:43.978336 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="registry-server" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978344 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="registry-server" Feb 16 15:48:43 crc kubenswrapper[4748]: E0216 15:48:43.978373 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerName="copy" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978381 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerName="copy" Feb 16 15:48:43 crc kubenswrapper[4748]: E0216 15:48:43.978405 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerName="gather" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978412 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerName="gather" Feb 16 15:48:43 crc kubenswrapper[4748]: E0216 15:48:43.978422 4748 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="extract-content" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978430 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="extract-content" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978653 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerName="copy" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978681 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a18aca8-c8d5-4cd1-a6f2-620318f726c3" containerName="registry-server" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.978703 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="04b82838-0ea9-48cb-9883-fa59c3fe3595" containerName="gather" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.980366 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:43 crc kubenswrapper[4748]: I0216 15:48:43.996429 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5vqww"] Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.063812 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phdmg\" (UniqueName: \"kubernetes.io/projected/f52ce3fd-38f6-4102-b5d2-a018720f080c-kube-api-access-phdmg\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.063872 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-utilities\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.063950 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-catalog-content\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.165727 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-utilities\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.165906 4748 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-catalog-content\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.166113 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-utilities\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.166374 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-catalog-content\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.167909 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phdmg\" (UniqueName: \"kubernetes.io/projected/f52ce3fd-38f6-4102-b5d2-a018720f080c-kube-api-access-phdmg\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.187308 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phdmg\" (UniqueName: \"kubernetes.io/projected/f52ce3fd-38f6-4102-b5d2-a018720f080c-kube-api-access-phdmg\") pod \"community-operators-5vqww\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.308343 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:44 crc kubenswrapper[4748]: I0216 15:48:44.798984 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5vqww"] Feb 16 15:48:45 crc kubenswrapper[4748]: I0216 15:48:45.265393 4748 generic.go:334] "Generic (PLEG): container finished" podID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerID="59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf" exitCode=0 Feb 16 15:48:45 crc kubenswrapper[4748]: I0216 15:48:45.265482 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vqww" event={"ID":"f52ce3fd-38f6-4102-b5d2-a018720f080c","Type":"ContainerDied","Data":"59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf"} Feb 16 15:48:45 crc kubenswrapper[4748]: I0216 15:48:45.265747 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vqww" event={"ID":"f52ce3fd-38f6-4102-b5d2-a018720f080c","Type":"ContainerStarted","Data":"c02472a4834c2a29f58b8095e198213033d3d72cc48ae56d9a2b565eff446c5e"} Feb 16 15:48:46 crc kubenswrapper[4748]: I0216 15:48:46.274969 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vqww" event={"ID":"f52ce3fd-38f6-4102-b5d2-a018720f080c","Type":"ContainerStarted","Data":"b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be"} Feb 16 15:48:47 crc kubenswrapper[4748]: I0216 15:48:47.285063 4748 generic.go:334] "Generic (PLEG): container finished" podID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerID="b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be" exitCode=0 Feb 16 15:48:47 crc kubenswrapper[4748]: I0216 15:48:47.285144 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vqww" 
event={"ID":"f52ce3fd-38f6-4102-b5d2-a018720f080c","Type":"ContainerDied","Data":"b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be"} Feb 16 15:48:47 crc kubenswrapper[4748]: I0216 15:48:47.997536 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:48:47 crc kubenswrapper[4748]: E0216 15:48:47.998165 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:48:48 crc kubenswrapper[4748]: E0216 15:48:48.007500 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:48:48 crc kubenswrapper[4748]: I0216 15:48:48.294539 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vqww" event={"ID":"f52ce3fd-38f6-4102-b5d2-a018720f080c","Type":"ContainerStarted","Data":"012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397"} Feb 16 15:48:48 crc kubenswrapper[4748]: I0216 15:48:48.322054 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5vqww" podStartSLOduration=2.8871077659999997 podStartE2EDuration="5.322035004s" podCreationTimestamp="2026-02-16 15:48:43 +0000 UTC" firstStartedPulling="2026-02-16 15:48:45.267408496 +0000 UTC m=+3350.959077545" lastFinishedPulling="2026-02-16 15:48:47.702335714 
+0000 UTC m=+3353.394004783" observedRunningTime="2026-02-16 15:48:48.31661771 +0000 UTC m=+3354.008286769" watchObservedRunningTime="2026-02-16 15:48:48.322035004 +0000 UTC m=+3354.013704043" Feb 16 15:48:54 crc kubenswrapper[4748]: I0216 15:48:54.308876 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:54 crc kubenswrapper[4748]: I0216 15:48:54.309461 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:54 crc kubenswrapper[4748]: I0216 15:48:54.364589 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:54 crc kubenswrapper[4748]: I0216 15:48:54.425108 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:54 crc kubenswrapper[4748]: I0216 15:48:54.599076 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5vqww"] Feb 16 15:48:56 crc kubenswrapper[4748]: I0216 15:48:56.391638 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5vqww" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="registry-server" containerID="cri-o://012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397" gracePeriod=2 Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.120025 4748 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.194698 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-utilities\") pod \"f52ce3fd-38f6-4102-b5d2-a018720f080c\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.194891 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phdmg\" (UniqueName: \"kubernetes.io/projected/f52ce3fd-38f6-4102-b5d2-a018720f080c-kube-api-access-phdmg\") pod \"f52ce3fd-38f6-4102-b5d2-a018720f080c\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.195065 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-catalog-content\") pod \"f52ce3fd-38f6-4102-b5d2-a018720f080c\" (UID: \"f52ce3fd-38f6-4102-b5d2-a018720f080c\") " Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.195651 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-utilities" (OuterVolumeSpecName: "utilities") pod "f52ce3fd-38f6-4102-b5d2-a018720f080c" (UID: "f52ce3fd-38f6-4102-b5d2-a018720f080c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.202481 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f52ce3fd-38f6-4102-b5d2-a018720f080c-kube-api-access-phdmg" (OuterVolumeSpecName: "kube-api-access-phdmg") pod "f52ce3fd-38f6-4102-b5d2-a018720f080c" (UID: "f52ce3fd-38f6-4102-b5d2-a018720f080c"). InnerVolumeSpecName "kube-api-access-phdmg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.265273 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f52ce3fd-38f6-4102-b5d2-a018720f080c" (UID: "f52ce3fd-38f6-4102-b5d2-a018720f080c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.298194 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.298223 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phdmg\" (UniqueName: \"kubernetes.io/projected/f52ce3fd-38f6-4102-b5d2-a018720f080c-kube-api-access-phdmg\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.298235 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f52ce3fd-38f6-4102-b5d2-a018720f080c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.406408 4748 generic.go:334] "Generic (PLEG): container finished" podID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerID="012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397" exitCode=0 Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.406455 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5vqww" event={"ID":"f52ce3fd-38f6-4102-b5d2-a018720f080c","Type":"ContainerDied","Data":"012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397"} Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.406486 4748 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5vqww" event={"ID":"f52ce3fd-38f6-4102-b5d2-a018720f080c","Type":"ContainerDied","Data":"c02472a4834c2a29f58b8095e198213033d3d72cc48ae56d9a2b565eff446c5e"} Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.406505 4748 scope.go:117] "RemoveContainer" containerID="012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.406548 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5vqww" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.439444 4748 scope.go:117] "RemoveContainer" containerID="b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.440278 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5vqww"] Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.450671 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5vqww"] Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.458019 4748 scope.go:117] "RemoveContainer" containerID="59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.478332 4748 scope.go:117] "RemoveContainer" containerID="012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397" Feb 16 15:48:57 crc kubenswrapper[4748]: E0216 15:48:57.478658 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397\": container with ID starting with 012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397 not found: ID does not exist" containerID="012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 
15:48:57.478704 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397"} err="failed to get container status \"012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397\": rpc error: code = NotFound desc = could not find container \"012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397\": container with ID starting with 012512ac4986938256f145112c1a2851c4e7555c30fd70872901106f0af51397 not found: ID does not exist" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.478752 4748 scope.go:117] "RemoveContainer" containerID="b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be" Feb 16 15:48:57 crc kubenswrapper[4748]: E0216 15:48:57.479289 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be\": container with ID starting with b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be not found: ID does not exist" containerID="b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.479321 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be"} err="failed to get container status \"b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be\": rpc error: code = NotFound desc = could not find container \"b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be\": container with ID starting with b582734ab647e4fe8c94508752ece464a40a7b74516db5e4fb2ef7b1270bf6be not found: ID does not exist" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.479343 4748 scope.go:117] "RemoveContainer" containerID="59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf" Feb 16 15:48:57 crc 
kubenswrapper[4748]: E0216 15:48:57.479637 4748 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf\": container with ID starting with 59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf not found: ID does not exist" containerID="59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf" Feb 16 15:48:57 crc kubenswrapper[4748]: I0216 15:48:57.479678 4748 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf"} err="failed to get container status \"59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf\": rpc error: code = NotFound desc = could not find container \"59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf\": container with ID starting with 59e552419b1c191bb5ef70208d0ac886c16884945de51d87c3a455fb6582c0bf not found: ID does not exist" Feb 16 15:48:58 crc kubenswrapper[4748]: I0216 15:48:58.995669 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:48:58 crc kubenswrapper[4748]: E0216 15:48:58.996912 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:48:59 crc kubenswrapper[4748]: I0216 15:48:59.009836 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" path="/var/lib/kubelet/pods/f52ce3fd-38f6-4102-b5d2-a018720f080c/volumes" Feb 16 15:49:00 crc 
kubenswrapper[4748]: E0216 15:49:00.997481 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:49:10 crc kubenswrapper[4748]: I0216 15:49:10.995775 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:49:10 crc kubenswrapper[4748]: E0216 15:49:10.997238 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:49:12 crc kubenswrapper[4748]: E0216 15:49:12.996115 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:49:25 crc kubenswrapper[4748]: I0216 15:49:25.994639 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:49:25 crc kubenswrapper[4748]: E0216 15:49:25.995959 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-p7ttg_openshift-machine-config-operator(fafb0b41-fe7a-4d57-a714-4666580d6ae6)\"" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" Feb 16 15:49:27 crc kubenswrapper[4748]: E0216 15:49:27.996135 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:49:38 crc kubenswrapper[4748]: I0216 15:49:38.994502 4748 scope.go:117] "RemoveContainer" containerID="508f10b8fa633800e3715bf23965f0408b7c1d3f8aedb9dc7fd091603027690b" Feb 16 15:49:39 crc kubenswrapper[4748]: I0216 15:49:39.843175 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" event={"ID":"fafb0b41-fe7a-4d57-a714-4666580d6ae6","Type":"ContainerStarted","Data":"13a12666a8b212faf2bc64d65652b5ae177a6b175000214ba7f8da3d38c251a3"} Feb 16 15:49:39 crc kubenswrapper[4748]: E0216 15:49:39.998815 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:49:52 crc kubenswrapper[4748]: I0216 15:49:52.996857 4748 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 15:49:53 crc kubenswrapper[4748]: E0216 15:49:53.121801 4748 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current 
in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:49:53 crc kubenswrapper[4748]: E0216 15:49:53.121870 4748 kuberuntime_image.go:55] "Failed to pull image" err="initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 16 15:49:53 crc kubenswrapper[4748]: E0216 15:49:53.122011 4748 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhpm6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-8skr8_openstack(67fe68e5-f0bc-406d-8880-9c39649848de): ErrImagePull: initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. To pull, revive via time machine" logger="UnhandledError" Feb 16 15:49:53 crc kubenswrapper[4748]: E0216 15:49:53.123334 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"initializing source docker://quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current: reading manifest current in quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api: unknown: Tag current was deleted or has expired. 
To pull, revive via time machine\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:50:06 crc kubenswrapper[4748]: E0216 15:50:06.998227 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:50:21 crc kubenswrapper[4748]: E0216 15:50:21.007493 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:50:33 crc kubenswrapper[4748]: E0216 15:50:33.996963 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:50:45 crc kubenswrapper[4748]: E0216 15:50:45.997870 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.134324 4748 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5fhf"] Feb 16 15:50:53 crc kubenswrapper[4748]: E0216 15:50:53.135675 4748 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="extract-utilities" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.135700 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="extract-utilities" Feb 16 15:50:53 crc kubenswrapper[4748]: E0216 15:50:53.135761 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="extract-content" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.135775 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="extract-content" Feb 16 15:50:53 crc kubenswrapper[4748]: E0216 15:50:53.135835 4748 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="registry-server" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.135849 4748 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="registry-server" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.136229 4748 memory_manager.go:354] "RemoveStaleState removing state" podUID="f52ce3fd-38f6-4102-b5d2-a018720f080c" containerName="registry-server" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.139336 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.150497 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5fhf"] Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.238571 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6w6n\" (UniqueName: \"kubernetes.io/projected/975bfe2d-3b97-40ee-ae7b-a29a730d335f-kube-api-access-t6w6n\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.238839 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-catalog-content\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.238887 4748 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-utilities\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.340886 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6w6n\" (UniqueName: \"kubernetes.io/projected/975bfe2d-3b97-40ee-ae7b-a29a730d335f-kube-api-access-t6w6n\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.341158 4748 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-catalog-content\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.341215 4748 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-utilities\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.342040 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-catalog-content\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.342158 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-utilities\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.364558 4748 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6w6n\" (UniqueName: \"kubernetes.io/projected/975bfe2d-3b97-40ee-ae7b-a29a730d335f-kube-api-access-t6w6n\") pod \"certified-operators-r5fhf\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") " pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:53 crc kubenswrapper[4748]: I0216 15:50:53.482590 4748 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:50:54 crc kubenswrapper[4748]: I0216 15:50:54.047652 4748 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5fhf"] Feb 16 15:50:54 crc kubenswrapper[4748]: I0216 15:50:54.743450 4748 generic.go:334] "Generic (PLEG): container finished" podID="975bfe2d-3b97-40ee-ae7b-a29a730d335f" containerID="d88e226021d5143584640e7af8d63c9f9c5577c8c5b53fded4181f3f5b222c4e" exitCode=0 Feb 16 15:50:54 crc kubenswrapper[4748]: I0216 15:50:54.743503 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5fhf" event={"ID":"975bfe2d-3b97-40ee-ae7b-a29a730d335f","Type":"ContainerDied","Data":"d88e226021d5143584640e7af8d63c9f9c5577c8c5b53fded4181f3f5b222c4e"} Feb 16 15:50:54 crc kubenswrapper[4748]: I0216 15:50:54.743534 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5fhf" event={"ID":"975bfe2d-3b97-40ee-ae7b-a29a730d335f","Type":"ContainerStarted","Data":"3562faf9259f3e5007f2099c30eb0076bbc75fa2ad1dac3252f6a956e00ead8b"} Feb 16 15:50:55 crc kubenswrapper[4748]: I0216 15:50:55.756293 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5fhf" event={"ID":"975bfe2d-3b97-40ee-ae7b-a29a730d335f","Type":"ContainerStarted","Data":"53b37d2055a73d96aaeb5a4f66cf6ac4228dbf7307270e2e6e0edb0418b41c8d"} Feb 16 15:50:56 crc kubenswrapper[4748]: I0216 15:50:56.773223 4748 generic.go:334] "Generic (PLEG): container finished" podID="975bfe2d-3b97-40ee-ae7b-a29a730d335f" containerID="53b37d2055a73d96aaeb5a4f66cf6ac4228dbf7307270e2e6e0edb0418b41c8d" exitCode=0 Feb 16 15:50:56 crc kubenswrapper[4748]: I0216 15:50:56.773328 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5fhf" 
event={"ID":"975bfe2d-3b97-40ee-ae7b-a29a730d335f","Type":"ContainerDied","Data":"53b37d2055a73d96aaeb5a4f66cf6ac4228dbf7307270e2e6e0edb0418b41c8d"} Feb 16 15:50:57 crc kubenswrapper[4748]: I0216 15:50:57.783321 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5fhf" event={"ID":"975bfe2d-3b97-40ee-ae7b-a29a730d335f","Type":"ContainerStarted","Data":"b713abebc74cadd5e459bef348088dc8f33dd5bc5fe46395ebe47ff58b07cf35"} Feb 16 15:50:57 crc kubenswrapper[4748]: I0216 15:50:57.807316 4748 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5fhf" podStartSLOduration=2.1413890110000002 podStartE2EDuration="4.807301342s" podCreationTimestamp="2026-02-16 15:50:53 +0000 UTC" firstStartedPulling="2026-02-16 15:50:54.747456709 +0000 UTC m=+3480.439125748" lastFinishedPulling="2026-02-16 15:50:57.41336904 +0000 UTC m=+3483.105038079" observedRunningTime="2026-02-16 15:50:57.801827046 +0000 UTC m=+3483.493496105" watchObservedRunningTime="2026-02-16 15:50:57.807301342 +0000 UTC m=+3483.498970381" Feb 16 15:50:58 crc kubenswrapper[4748]: E0216 15:50:58.997801 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de" Feb 16 15:51:03 crc kubenswrapper[4748]: I0216 15:51:03.483126 4748 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:51:03 crc kubenswrapper[4748]: I0216 15:51:03.483947 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r5fhf" Feb 16 15:51:03 crc kubenswrapper[4748]: I0216 15:51:03.566833 4748 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5fhf"
Feb 16 15:51:03 crc kubenswrapper[4748]: I0216 15:51:03.898000 4748 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r5fhf"
Feb 16 15:51:04 crc kubenswrapper[4748]: I0216 15:51:04.906424 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5fhf"]
Feb 16 15:51:05 crc kubenswrapper[4748]: I0216 15:51:05.861447 4748 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5fhf" podUID="975bfe2d-3b97-40ee-ae7b-a29a730d335f" containerName="registry-server" containerID="cri-o://b713abebc74cadd5e459bef348088dc8f33dd5bc5fe46395ebe47ff58b07cf35" gracePeriod=2
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.874100 4748 generic.go:334] "Generic (PLEG): container finished" podID="975bfe2d-3b97-40ee-ae7b-a29a730d335f" containerID="b713abebc74cadd5e459bef348088dc8f33dd5bc5fe46395ebe47ff58b07cf35" exitCode=0
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.874356 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5fhf" event={"ID":"975bfe2d-3b97-40ee-ae7b-a29a730d335f","Type":"ContainerDied","Data":"b713abebc74cadd5e459bef348088dc8f33dd5bc5fe46395ebe47ff58b07cf35"}
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.874838 4748 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5fhf" event={"ID":"975bfe2d-3b97-40ee-ae7b-a29a730d335f","Type":"ContainerDied","Data":"3562faf9259f3e5007f2099c30eb0076bbc75fa2ad1dac3252f6a956e00ead8b"}
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.874860 4748 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3562faf9259f3e5007f2099c30eb0076bbc75fa2ad1dac3252f6a956e00ead8b"
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.906623 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5fhf"
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.961236 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6w6n\" (UniqueName: \"kubernetes.io/projected/975bfe2d-3b97-40ee-ae7b-a29a730d335f-kube-api-access-t6w6n\") pod \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") "
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.961348 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-catalog-content\") pod \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") "
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.961557 4748 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-utilities\") pod \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\" (UID: \"975bfe2d-3b97-40ee-ae7b-a29a730d335f\") "
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.962686 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-utilities" (OuterVolumeSpecName: "utilities") pod "975bfe2d-3b97-40ee-ae7b-a29a730d335f" (UID: "975bfe2d-3b97-40ee-ae7b-a29a730d335f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:51:06 crc kubenswrapper[4748]: I0216 15:51:06.970952 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/975bfe2d-3b97-40ee-ae7b-a29a730d335f-kube-api-access-t6w6n" (OuterVolumeSpecName: "kube-api-access-t6w6n") pod "975bfe2d-3b97-40ee-ae7b-a29a730d335f" (UID: "975bfe2d-3b97-40ee-ae7b-a29a730d335f"). InnerVolumeSpecName "kube-api-access-t6w6n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 16 15:51:07 crc kubenswrapper[4748]: I0216 15:51:07.028996 4748 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "975bfe2d-3b97-40ee-ae7b-a29a730d335f" (UID: "975bfe2d-3b97-40ee-ae7b-a29a730d335f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 16 15:51:07 crc kubenswrapper[4748]: I0216 15:51:07.064927 4748 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6w6n\" (UniqueName: \"kubernetes.io/projected/975bfe2d-3b97-40ee-ae7b-a29a730d335f-kube-api-access-t6w6n\") on node \"crc\" DevicePath \"\""
Feb 16 15:51:07 crc kubenswrapper[4748]: I0216 15:51:07.064979 4748 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 15:51:07 crc kubenswrapper[4748]: I0216 15:51:07.064991 4748 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/975bfe2d-3b97-40ee-ae7b-a29a730d335f-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 15:51:07 crc kubenswrapper[4748]: I0216 15:51:07.884777 4748 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5fhf"
Feb 16 15:51:07 crc kubenswrapper[4748]: I0216 15:51:07.927071 4748 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5fhf"]
Feb 16 15:51:07 crc kubenswrapper[4748]: I0216 15:51:07.936304 4748 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5fhf"]
Feb 16 15:51:09 crc kubenswrapper[4748]: I0216 15:51:09.013561 4748 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="975bfe2d-3b97-40ee-ae7b-a29a730d335f" path="/var/lib/kubelet/pods/975bfe2d-3b97-40ee-ae7b-a29a730d335f/volumes"
Feb 16 15:51:12 crc kubenswrapper[4748]: E0216 15:51:12.996852 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:51:25 crc kubenswrapper[4748]: E0216 15:51:25.999022 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:51:39 crc kubenswrapper[4748]: E0216 15:51:38.999302 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:51:53 crc kubenswrapper[4748]: E0216 15:51:53.997655 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:52:04 crc kubenswrapper[4748]: I0216 15:52:04.729702 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:52:04 crc kubenswrapper[4748]: I0216 15:52:04.730289 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:52:08 crc kubenswrapper[4748]: E0216 15:52:08.000147 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:52:23 crc kubenswrapper[4748]: E0216 15:52:22.998208 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:52:34 crc kubenswrapper[4748]: I0216 15:52:34.729915 4748 patch_prober.go:28] interesting pod/machine-config-daemon-p7ttg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 15:52:34 crc kubenswrapper[4748]: I0216 15:52:34.730580 4748 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-p7ttg" podUID="fafb0b41-fe7a-4d57-a714-4666580d6ae6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 15:52:37 crc kubenswrapper[4748]: E0216 15:52:37.001278 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:52:48 crc kubenswrapper[4748]: E0216 15:52:48.998048 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"
Feb 16 15:53:00 crc kubenswrapper[4748]: E0216 15:53:00.999535 4748 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-8skr8" podUID="67fe68e5-f0bc-406d-8880-9c39649848de"